Enterprise Information Systems:
Concepts, Methodologies, Tools and Applications Information Resources Management Association USA
Volume I
Business Science Reference
Hershey • New York
Director of Editorial Content: Kristin Klinger
Director of Book Publications: Julia Mosemann
Acquisitions Editor: Lindsay Johnston
Development Editor: Devvin Earnest
Publishing Assistant: Deanna Jo Zombro
Typesetters: Michael Brehm, Casey Conapitski, Keith Glazewski, Natalie Pronio, Milan Vracarich, Jr., Deanna Zombro
Production Editor: Jamie Snavely
Cover Design: Lisa Tosheff
Published in the United States of America by
Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com/reference

and in the United Kingdom by
Information Science Reference (an imprint of IGI Global)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanbookstore.com

Copyright © 2011 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.
Enterprise information systems : concepts, methodologies, tools and applications / Information Resources Management Association, editor.
p. cm.
Includes bibliographical references and index.
Summary: "This three-volume collection provides a complete assessment of the latest developments in enterprise information systems research, including development, design, and emerging methodologies"--Provided by publisher.
ISBN 978-1-61692-852-0 (hardcover) -- ISBN 978-1-61692-853-7 (ebook)
1. Management information systems. 2. Information technology--Management. 3. Electronic commerce. 4. Business enterprises--Computer networks. I. Information Resources Management Association.
HD30.213.E582 2011
658.4'038011--dc22
2010032232

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book set is original material. The views expressed in this book are those of the authors, but not necessarily those of the publisher.
Editor-in-Chief
Mehdi Khosrow-Pour, DBA
Contemporary Research in Information Science and Technology, Book Series
Associate Editors
Steve Clarke, University of Hull, UK
Murray E. Jennex, San Diego State University, USA
Annie Becker, Florida Institute of Technology, USA
Ari-Veikko Anttiroiko, University of Tampere, Finland
Editorial Advisory Board
Sherif Kamel, American University in Cairo, Egypt
In Lee, Western Illinois University, USA
Jerzy Kisielnicki, Warsaw University, Poland
Keng Siau, University of Nebraska-Lincoln, USA
Amar Gupta, Arizona University, USA
Craig van Slyke, University of Central Florida, USA
John Wang, Montclair State University, USA
Vishanth Weerakkody, Brunel University, UK
Additional Research Collections found in the “Contemporary Research in Information Science and Technology” Book Series

Data Mining and Warehousing: Concepts, Methodologies, Tools, and Applications
John Wang, Montclair State University, USA • 6-volume set • ISBN 978-1-60566-056-1

Electronic Business: Concepts, Methodologies, Tools, and Applications
In Lee, Western Illinois University, USA • 4-volume set • ISBN 978-1-59904-943-4

Electronic Commerce: Concepts, Methodologies, Tools, and Applications
S. Ann Becker, Florida Institute of Technology, USA • 4-volume set • ISBN 978-1-59904-943-4

Electronic Government: Concepts, Methodologies, Tools, and Applications
Ari-Veikko Anttiroiko, University of Tampere, Finland • 6-volume set • ISBN 978-1-59904-947-2

Knowledge Management: Concepts, Methodologies, Tools, and Applications
Murray E. Jennex, San Diego State University, USA • 6-volume set • ISBN 978-1-59904-933-5

Information Communication Technologies: Concepts, Methodologies, Tools, and Applications
Craig Van Slyke, University of Central Florida, USA • 6-volume set • ISBN 978-1-59904-949-6

Intelligent Information Technologies: Concepts, Methodologies, Tools, and Applications
Vijayan Sugumaran, Oakland University, USA • 4-volume set • ISBN 978-1-59904-941-0

Information Security and Ethics: Concepts, Methodologies, Tools, and Applications
Hamid Nemati, The University of North Carolina at Greensboro, USA • 6-volume set • ISBN 978-1-59904-937-3

Medical Informatics: Concepts, Methodologies, Tools, and Applications
Joseph Tan, Wayne State University, USA • 4-volume set • ISBN 978-1-60566-050-9

Mobile Computing: Concepts, Methodologies, Tools, and Applications
David Taniar, Monash University, Australia • 6-volume set • ISBN 978-1-60566-054-7

Multimedia Technologies: Concepts, Methodologies, Tools, and Applications
Syed Mahbubur Rahman, Minnesota State University, Mankato, USA • 3-volume set • ISBN 978-1-60566-054-7

Virtual Technologies: Concepts, Methodologies, Tools, and Applications
Jerzy Kisielnicki, Warsaw University, Poland • 3-volume set • ISBN 978-1-59904-955-7
Free institution-wide online access with the purchase of a print collection!
Information Science Reference
Hershey • New York

Order online at www.igi-global.com or call 717-533-8845 ext. 100
Mon–Fri 8:30 am–5:00 pm (EST) or fax 24 hours a day 717-533-7115
List of Contributors
Aalmink, Jan \ Carl von Ossietzky University Oldenburg, Germany ... 134
Aguirre, Jose-Luis \ Tecnologico de Monterrey, Mexico ... 1836
Ajaefobi, Joseph \ Loughborough University, UK ... 370
Aklouf, Youcef \ University of Science and Technology, Algeria ... 329
Aktas, Mehmet S. \ TUBITAK (Turkish National Science Foundation), Turkey ... 1
Ali, A. K. Hairul Nizam Pengiran Haji \ Staffordshire University, UK ... 1162
Al-Ibraheem, Nawaf \ KNET, Kuwait ... 752
Alizai, Fahd \ Victoria University, Australia ... 487
Alonso, Juan Ignacio Guerrero \ University of Seville, Spain ... 472
Altintas, N. Ilker \ Cybersoft Information Technologies, Turkey ... 1422
Anbuudayasankar, S. P. \ Amrita School of Engineering, India ... 1537
Antlova, Klara \ Technical University, Czech Republic ... 1573
Argyropoulou, Maria \ Brunel University, UK ... 1447
Arora, Hina \ Arizona State University, USA ... 584
Asprey, Len \ Practical Information Management Solutions Pty Ltd, Australia ... 1470
Atkins, Anthony S. \ Staffordshire University, UK ... 1162
Averweg, Udo \ Information Services, eThekwini Municipality & University of KwaZulu-Natal, South Africa ... 706
Balasundaram, S.R. \ National Institute of Technology, Tiruchirappalli, India ... 279
Barik, Mridul Sankar \ Jadavpur University, India ... 154
Barroso, João \ Instituto Superior de Engenharia do Porto, Portugal ... 1769
Batenburg, Ronald \ Utrecht University, The Netherlands ... 1349
Ben-Abdallah, Hanene \ Mir@cl Laboratory, Faculté des Sciences Economiques et de Gestion, Tunisia ... 427
Boehm, Barry \ University of Southern California, USA ... 986
Bokma, Albert \ University of Sunderland, UK ... 550
Bondarouk, Tanya \ University of Twente, The Netherlands ... 1379
Boonstra, Albert \ University of Groningen, The Netherlands ... 1480
Boudreau, Marie-Claude \ University of Georgia, USA ... 1496
Brahe, Steen \ Danske Bank, Denmark ... 835
Brainin, Esther \ Ruppin Academic Center, Israel ... 1295
Brena, Ramón \ Tecnologico de Monterrey, Mexico ... 1836
Bruno, Giorgio \ Politecnico di Torino, Italy ... 1247
Brydon, Michael \ Simon Fraser University, Canada ... 1399
Buccella, Agustina \ Universidad Nacional del Comahue, Argentina ... 207
Burbach, Ralf \ Institute of Technology Carlow, Ireland ... 1370
Burgard, Martin \ Saarland University, Germany ... 1013
Burgess, Stephen \ Victoria University, Australia ... 487
Bussler, Christoph \ Merced Systems Inc., USA ... 128
Čančer, Vesna \ University of Maribor, Slovenia ... 1871
Cechich, Alejandra \ Universidad Nacional del Comahue, Argentina ... 207
Cetin, Semih \ Cybersoft Information Technologies, Turkey ... 1422
Chang, Cheng-Chun \ National Chung Cheng University, Taiwan ... 452
Chang, Shuchih Ernest \ National Chung Hsing University, Taiwan ... 714
Chen, Jinjun \ Swinburne University of Technology, Australia ... 1081
Chen, Ning \ Xi’an Polytechnic University, China ... 544
Chen, Qiyang \ Montclair State University, USA ... 533
Chen, Ruey-Shun \ China University of Technology, Taiwan ... 1605
Cheng, Chen-Yang \ Penn State University, USA ... 86
Chia Cua, Francisco \ Otago Polytechnic, New Zealand ... 346
Chiu, Dickson K.W. \ Dickson Computer Systems, Hong Kong ... 1902
Choudhury, Islam \ London Metropolitan University, UK ... 1646
Cua, Francisco Chia \ University of Otago, New Zealand ... 1663
Dan, Wang \ Harbin Institute of Technology, China ... 1021
Daneva, Maya \ University of Twente, The Netherlands ... 1941
Dave, Sumita \ Shri Shankaracharya Institute of Management & Technology, India ... 1183
Davis, Ashley \ University of Georgia, USA ... 776
de Carvalho Costa, Rogério Luís \ University of Coimbra, Portugal ... 901
de Carvalho, Rogerio Atem \ Federal Center for Technological Education of Campos, Brazil ... 99
de Cesare, Sergio \ Brunel University, UK ... 1646
Di Florido, Emily \ Brunel University, UK ... 1646
Díaz, Angel \ Instituto de Empresa Business School, Spain ... 648
Drias, Habiba \ University of Science and Technology, Algeria ... 329
Dundon, Tony \ National University of Ireland-Galway, Ireland ... 1370
Dunn, Cheryl L. \ Grand Valley State University, USA ... 189
Eder, Johann \ University of Klagenfurt, Austria ... 566
Eggert, Sandy \ University of Potsdam, Germany ... 946, 1265
Engbers, Sander \ COGAS BV. Business Unit Infra & Networkmanagement, The Netherlands ... 1379
Erwin, Geoff \ Cape Peninsula University of Technology, South Africa ... 706
Faulkner, Stéphane \ University of Namur Rempart de la Vierge, Belgium ... 1141
Feki, Jamel \ Mir@cl Laboratory, Faculté des Sciences Economiques et de Gestion, Tunisia ... 427
Ferran, Carlos \ The Pennsylvania State University, USA ... 1816
Foster, Steve \ University of Hertfordshire and NorthgateArinso, UK ... 250
Furtado, Pedro \ University of Coimbra, Portugal ... 901
Ganesh, K. \ Global Business Services – Global Delivery, IBM India Private Limited, India ... 1537
Garito, Marco \ Digital Business, Italy ... 823
Garrett, Tony C. \ Korea University, Republic of Korea ... 346, 1663
Gerard, Gregory J. \ Florida State University, USA ... 189
Gerritsen, Bart H. M. \ TNO Netherlands Organization for Applied Scientific Research, The Netherlands ... 921
Ghanbary, Abbass \ MethodScience.com & University of Western Sydney, Australia ... 668
Ghani, Shehzad Khalid \ Prince Sultan University, Saudi Arabia ... 1960
Gibson, Candace J. \ University of Western Ontario, Canada ... 292
Goicoechea, Iñigo Monedero \ University of Seville, Spain ... 472
Gómez, Jorge Marx \ Carl von Ossietzky University Oldenburg, Germany ... 134, 508
González-Benito, Óscar \ University of Salamanca, Spain ... 1730
Grabski, Severin V. \ Michigan State University, USA ... 189
Green, Rolf \ OneView Pty Ltd, Australia ... 1470
Grieger, Martin \ Accenture, Germany ... 638
Gronau, Norbert \ University of Potsdam, Germany ... 946, 1265
Gulla, Jon Atle \ The Norwegian University of Science and Technology, Norway ... 866
Gunasekaran, Angappa \ University of Massachusetts—Dartmouth, USA ... 21
Gurau, Calin \ GSCM – Montpellier Business School, France ... 1327
Hachaichi, Yasser \ Mir@cl Laboratory, Faculté des Sciences Economiques et de Gestion, Tunisia ... 427
Hajnal, Ákos \ Computer and Automation Research Institute, Hungary ... 972
Halgeri, Pritish \ Kansas State University, USA ... 1121
Hartmann, Evi \ SMI Supply Management Institute, Germany ... 638
Hauc, Anton \ University of Maribor, Slovenia ... 1871
Heemstra, Fred \ Open University Nederland and KWD Result management, The Netherlands ... 1847
Helms, Marilyn M. \ Dalton State College, USA ... 1605
Hilpert, Ditmar \ Reutlingen University, Germany ... 1924
Holmström, Jonny \ Umeå University, Sweden ... 1496
Holsapple, Clyde W. \ University of Kentucky, USA ... 1099
Hu, Haiyang \ Zhejiang Gongshang University, China ... 1902
Hu, Hua \ Zhejiang Gongshang University, China ... 1902
Huang, Shi-Ming \ National Chung Cheng University, Taiwan ... 452
Hung, Patrick C. K. \ University of Ontario Institute of Technology, Canada ... 1902
Hwang, Mark I. \ Central Michigan University, USA ... 1657
Ignatiadis, Ioannis \ University of Bath, UK ... 1209
Ingvaldsen, Jon Espen \ The Norwegian University of Science and Technology, Norway ... 866
Ioannou, George \ Athens University of Economics and Business, Greece ... 1447
Janssens, Guy \ Open University Nederland, The Netherlands ... 1847
Jih, Wen-Jang (Kenny) \ Middle Tennessee State University, USA ... 1605
Jiménez-Zarco, Ana Isabel \ Open University of Catalonia, Spain ... 1730
Kabene, Stefane M. \ University of Western Ontario, Canada ... 292
Kelzenberg, Kai \ RWTH Aachen University, Germany ... 68
Kerr, Don \ University of the Sunshine Coast, Australia ... 1748
Khan, Khaled M. \ Qatar University, Qatar ... 1113
Kidd, Paul T. \ Cheshire Henbury, UK ... 314
Kifor, Tamás \ Computer and Automation Research Institute, Hungary ... 972
Kimble, Chris \ Euromed Marseille École de Management, France ... 35
King, Lisa \ University of Western Ontario, Canada ... 292
Kolp, Manuel \ Université Catholique de Louvain Place des Doyens, Belgium ... 1141
Koopman, Gerwin \ Syntess Software, The Netherlands ... 1349
Kotzab, Herbert \ Copenhagen Business School, Denmark ... 638
Koufopoulos, Dimitrios N. \ Brunel University, UK ... 1447
Koumpis, Adamantios \ ALTEC S.A., Greece ... 1593
Krcmar, Helmut \ Technische Universität München, Germany ... 169
Krishnankutty, K. V. \ College of Engineering, Trivandrum, India ... 1960
Krogstie, John \ Norwegian University of Science and Technology (NTNU), Norway, & SINTEF ICT, Norway ... 731
Kumar, Kuldeep \ Florida International University, USA ... 1060
Kumta, Gita A. \ SVKM’s NMIMS University, School of Business Management, Mumbai, India ... 112
Kusters, Rob \ Eindhoven University of Technology, The Netherlands ... 1847
Lal, Shikha \ Banaras Hindu University (BHU), India ... 1553
Lämmer, Anne \ sd&m AG, and University of Potsdam, Germany ... 946, 1265
Lane, Jo Ann \ University of Southern California, USA ... 986
Lastres-Segret, José A. \ University of La Laguna, Spain ... 1341
Lee, Tzong-Ru \ National Chung Hsing University, Taiwan, ROC ... 1537
Lei, Chang \ Harbin Institute of Technology, China ... 1021
León de Mora, Carlos \ University of Seville, Spain ... 472
Li, Shing-Han \ Tatung University, Taiwan ... 452
Li, Zhang \ Harbin Institute of Technology, China ... 1021
Liang, Xiaoya \ Fudan University, China ... 617
Lin, Chad \ Curtin University of Technology, Australia ... 1030
Lin, Koong \ Tainan National University of the Arts, Taiwan ... 1030
Loonam, John \ Dublin City University, Ireland ... 1631
Lorenzo, Oswaldo \ Instituto de Empresa Business School, Spain ... 648
Lu, June \ University of Houston-Victoria, USA ... 533
Lübke, Daniel \ Leibniz Universität Hannover, Germany ... 508
Lukácsy, Gergely \ Budapest University of Technology and Economics, Hungary ... 972
Mabert, Vincent A. \ Indiana University, USA ... 1924
Makuch, Paul \ Institute for Information Systems at German Research Centre for Artificial Intelligence, Germany ... 817
Martínez-Ruiz, María Pilar \ University of Castilla-La Mancha, Spain ... 1730
Mathrani, Sanjay \ Massey University, New Zealand ... 53, 1233
Mazumdar, Chandan \ Jadavpur University, India ... 154
McDonagh, Joe \ University of Dublin, Trinity College, Ireland ... 1631
McGaughey, Ronald E. \ University of Central Arkansas, USA ... 21
McHaney, Roger \ Kansas State University, USA ... 1121
McLaughlin, Stephen \ University of Glasgow, UK ... 1513
Meza, Justin \ HP Labs, USA ... 267
Middleton, Michael \ Queensland University of Technology, Australia ... 1470
Millán, Rocío \ University of Seville, Spain ... 472
Millham, Richard C. \ Catholic University of Ghana, Ghana ... 181
Mishra, Alok \ Atilim University, Turkey ... 1279, 1318
Modrák, Vladimír \ Technical University of Košice, Slovakia ... 625
Mohan, Ashutosh \ Banaras Hindu University (BHU), India ... 1553
Mohandas, K. \ Amrita School of Engineering, India ... 1537
Møller, Charles \ Aalborg University, Denmark ... 1789
Motwani, Jaideep \ Grand Valley State University, USA ... 1447
Moynihan, Gary P. \ The University of Alabama, USA ... 235
Nair, Prashant R. \ Amrita University, Coimbatore, India ... 596
Nandhakumar, Joe \ University of Warwick, UK ... 1209
Nicolescu, Valentin \ Technische Universität München, Germany ... 169
Núñez-Gorrín, José M. \ University of La Laguna, Tenerife, Spain ... 1341
Pahl, Claus \ Dublin City University, Ireland ... 997
Paige, Richard \ University of York, UK ... 35
Papajorgji, Petraq \ Center for Applied Optimization, University of Florida, USA ... 687
Pardalos, Panos M. \ Center for Applied Optimization, University of Florida, USA ... 687
Parthasarathy, S. \ Thiagarajar College of Engineering, India ... 1172
Pei, Z. J. \ Kansas State University, USA ... 1121
Perko, Igor \ University of Maribor, Slovenia ... 1871
Petkov, Don \ Eastern Connecticut State University, USA ... 706
Piazza, Franca \ Saarland University, Germany ... 1013
Prabhu, Vittal \ Penn State University, USA ... 86
Protogeros, Nikos \ University of Macedonia, Greece ... 1593
Put, Dariusz \ Cracow University of Economics, Poland ... 1039
Raghu, T.S. \ Arizona State University, USA ... 584
Rahayu, Wenny \ La Trobe University, Australia ... 879
Rahimifard, Aysin \ Loughborough University, UK ... 370
Ramadoss, B. \ National Institute of Technology, Tiruchirappalli, India ... 279
Ramamohanarao, Kotagiri \ University of Melbourne, Australia ... 1081
Rashid, Mohammad A. \ Massey University, New Zealand ... 53, 1233
Reimers, Kai \ RWTH Aachen University, Germany ... 68
Ruël, Huub \ University of Twente, The Netherlands & American University of Beirut, Lebanon ... 752, 1715
Rusu, Laura Irina \ La Trobe University, Australia ... 879
Salaka, Vamsi \ Penn State University, USA ... 86
Salim, Ricardo \ Universidad Autónoma de Barcelona, Spain, & Cautus Networks Corp., Venezuela ... 1816
Sammon, David \ University College Cork, Ireland ... 1738
Schoenherr, Tobias \ Michigan State University, USA ... 1924
Sedera, Darshana \ Queensland University of Technology, Australia ... 958
Sengupta, Anirban \ Jadavpur University, India ... 154
Shakir, Maha \ Zayed University, United Arab Emirates ... 1797
Sharpanskykh, Alexei \ Vrije Universiteit Amsterdam, The Netherlands ... 1196
Sherringham, Keith \ IMS Corp, Australia ... 795, 805
Shrivastava, Monica \ Shri Shankaracharya Institute of Management & Technology, India ... 1183
Sindre, Guttorm \ Norwegian University of Science and Technology (NTNU), Norway ... 731
Singla, Ashim Raj \ Indian Institute of Foreign Trade, New Delhi, India ... 1617
Skytøen, Øyvind \ Norwegian University of Science and Technology (NTNU), Norway ... 731
Soja, Piotr \ Cracow University of Economics, Poland ... 1039
Soni, Ashok K. \ Indiana University, USA ... 1924
Subramoniam, Suresh \ Prince Sultan University, Saudi Arabia ... 1960
Sun, Chia-Ming \ National Yunlin University of Science & Technology, Taiwan ... 1605
Tabatabaie, Malihe \ University of York, UK ... 35
Taniar, David \ Monash University, Australia ... 879
Tansel, Abdullah Uz \ Baruch College – CUNY, USA ... 1461
Targowski, Andrew \ Western Michigan University, USA ... 397
Tektonidis, Dimitrios \ ALTEC S.A., Greece ... 550
ter Horst, Vincent \ Saxion Knowledge Center Innovation and Entrepreneurship, The Netherlands ... 1379
Tounsi, Mohamed \ Prince Sultan University, Saudi Arabia ... 1960
Trigo, Antonio \ Escola Superior de Tecnologia e Gestão de Oliveira do Hospital, Portugal ... 1769
Triviño, Félix Biscarri \ University of Seville, Spain ... 472
Triviño, Jesús Biscarri \ University of Seville, Spain ... 472
Trojer, Thomas \ University of Innsbruck, Austria ... 1902
Tufekci, Ozgur \ Cybersoft Information Technologies, Turkey ... 1422
Unhelkar, Bhuvan \ MethodScience.com & University of Western Sydney, Australia ... 356, 522, 668, 795, 805
Unruh, Amy \ University of Melbourne, Australia ... 1081
Valerio, Gabriel \ Tecnologico de Monterrey, Mexico ... 1836
Varajão, João \ Centro Algoritmi and University of Trás-os-Montes e Alto Douro, Portugal ... 1769
Varga, László Z. \ Computer and Automation Research Institute, Hungary ... 972
Varga, Mladen \ University of Zagreb, Croatia ... 1695
Venkataramanan, M.A. \ Indiana University, USA ... 1924
Veres, Csaba \ Norwegian University of Science and Technology (NTNU), Norway ... 731
Viehland, Dennis \ Massey University, New Zealand ... 53, 1233
Vining, Aidan R. \ Simon Fraser University, Canada ... 1399
Vinze, Ajay \ Arizona State University, USA ... 584
Vrecko, Igor \ University of Maribor, Slovenia ... 1871
Wagner, Thomas \ RWTH Aachen University, Germany ... 68
Wang, John \ Montclair State University, USA ... 533
Wang, Mingzhong \ University of Melbourne, Australia ... 1081
Wang, Minhong \ The University of Hong Kong, Hong Kong ... 1060
Wautelet, Yves \ Université Catholique de Louvain Place des Doyens, Belgium ... 1141
Werth, Dirk \ Institute for Information Systems at German Research Centre for Artificial Intelligence, Germany ... 817
Weston, Richard \ Loughborough University, UK ... 370
Wheeler, Zachary B. \ SDDM Technology, USA ... 217
Wiggisser, Karl \ University of Klagenfurt, Austria ... 566
Wittges, Holger \ Technische Universität München, Germany ... 169
Wu, Jiming \ California State University-East Bay, USA ... 1099
Wu, Ming-Chien (Mindy) \ University of Western Sydney, Australia ... 356
Yao, James \ Montclair State University, USA ... 533
Yen, David C. \ Miami University, USA ... 452
Zhu, Qin \ HP Labs, USA ... 267
Zhu, Yaoling \ Dublin City University, Ireland ... 997
Zhuang, Yi \ Zhejiang Gongshang University, China ... 1902
Contents
Volume I

Section I. Fundamental Concepts and Theories

This section serves as the foundation for this exhaustive reference tool by addressing crucial theories essential to the understanding of enterprise information systems. Chapters found within these pages provide an excellent framework in which to position enterprise information systems within the field of information science and technology. Individual contributions provide overviews of the history of enterprise information systems, the impact of information systems on organizations, and overviews of various enterprise information system processes such as enterprise resource planning and decision support systems. Within this introductory section, the reader can learn and choose from a compendium of expert research on the elemental theories underscoring enterprise information systems.

Chapter 1.1. Principles and Experiences: Designing and Building Enterprise Information Systems ... 1
Mehmet S. Aktas, TUBITAK (Turkish National Science Foundation), Turkey
Chapter 1.2. Evolution of Enterprise Resource Planning ... 21
Ronald E. McGaughey, University of Central Arkansas, USA
Angappa Gunasekaran, University of Massachusetts—Dartmouth, USA

Chapter 1.3. Exploring Enterprise Information Systems ... 35
Malihe Tabatabaie, University of York, UK
Richard Paige, University of York, UK
Chris Kimble, Euromed Marseille École de Management, France

Chapter 1.4. Enterprise Systems in Small and Medium-Sized Enterprises ... 53
Sanjay Mathrani, Massey University, New Zealand
Mohammad A. Rashid, Massey University, New Zealand
Dennis Viehland, Massey University, New Zealand

Chapter 1.5. A Conceptual Framework for Developing and Evaluating ERP Implementation Strategies in Multinational Organizations ... 68
Kai Kelzenberg, RWTH Aachen University, Germany
Thomas Wagner, RWTH Aachen University, Germany
Kai Reimers, RWTH Aachen University, Germany

Chapter 1.6. Integrated Research and Training in Enterprise Information Systems ... 86
Chen-Yang Cheng, Penn State University, USA
Vamsi Salaka, Penn State University, USA
Vittal Prabhu, Penn State University, USA

Chapter 1.7. Free and Open Source Enterprise Resources Planning ... 99
Rogerio Atem de Carvalho, Federal Center for Technological Education of Campos, Brazil

Chapter 1.8. E-Government and ERP: Challenges and Strategies ... 112
Gita A. Kumta, SVKM’s NMIMS University, School of Business Management, Mumbai, India

Chapter 1.9. Enterprise Application Integration (EAI) ... 128
Christoph Bussler, Merced Systems Inc., USA

Chapter 1.10. Enterprise Tomography: An Efficient Approach for Semi-Automatic Localization of Integration Concepts in VBLAs ... 134
Jan Aalmink, Carl von Ossietzky University Oldenburg, Germany
Jorge Marx Gómez, Carl von Ossietzky University Oldenburg, Germany

Chapter 1.11. Enterprise Information System Security: A Life-Cycle Approach ... 154
Chandan Mazumdar, Jadavpur University, India
Mridul Sankar Barik, Jadavpur University, India
Anirban Sengupta, Jadavpur University, India

Chapter 1.12. From ERP to Enterprise Service-Oriented Architecture ... 169
Valentin Nicolescu, Technische Universität München, Germany
Holger Wittges, Technische Universität München, Germany
Helmut Krcmar, Technische Universität München, Germany

Chapter 1.13. Data Reengineering of Legacy Systems ... 181
Richard C. Millham, Catholic University of Ghana, Ghana

Chapter 1.14. Semantically Modeled Databases in Integrated Enterprise Information Systems ... 189
Cheryl L. Dunn, Grand Valley State University, USA
Gregory J. Gerard, Florida State University, USA
Severin V. Grabski, Michigan State University, USA
Chapter 1.15. An Overview of Ontology-Driven Data Integration ... 207
Agustina Buccella, Universidad Nacional del Comahue, Argentina
Alejandra Cechich, Universidad Nacional del Comahue, Argentina

Chapter 1.16. A Fundamental SOA Approach to Rebuilding Enterprise Architecture for a Local Government after a Disaster ... 217
Zachary B. Wheeler, SDDM Technology, USA

Chapter 1.17. An Overview of Executive Information Systems ... 235
Gary P. Moynihan, The University of Alabama, USA

Chapter 1.18. Making Sense of e-HRM: Transformation, Technology and Power Relations ... 250
Steve Foster, University of Hertfordshire and NorthgateArinso, UK

Chapter 1.19. Mix, Match, Rediscovery: A Mashup Experiment of Knowledge Organization in an Enterprise Environment ... 267
Justin Meza, HP Labs, USA
Qin Zhu, HP Labs, USA

Chapter 1.20. Testing Guidelines for Developing Quality EAI Projects ... 279
S.R. Balasundaram, National Institute of Technology, Tiruchirappalli, India
B. Ramadoss, National Institute of Technology, Tiruchirappalli, India

Chapter 1.21. Technology and Human Resources Management in Health Care ... 292
Stefane M. Kabene, University of Western Ontario, Canada
Lisa King, University of Western Ontario, Canada
Candace J. Gibson, University of Western Ontario, Canada

Section II. Development and Design Methodologies

This section provides in-depth coverage of conceptual architectures, frameworks and methodologies related to the design and implementation of enterprise information systems. Throughout these contributions, research fundamentals in the discipline are presented and discussed. From broad examinations to specific discussions on particular frameworks and infrastructures, the research found within this section spans the discipline while also offering detailed, specific discussions. Basic designs, as well as abstract developments, are explained within these chapters, and frameworks for designing successful decision support systems, integrating new technologies, and developing and implementing efficient processes are included.

Chapter 2.1. Enterprise Information Systems: Aligning and Integrating Strategy, Technology, Organization and People ... 314
Paul T. Kidd, Cheshire Henbury, UK

Chapter 2.2. An Adaptive E-Commerce Architecture for Enterprise Information Exchange ... 329
Youcef Aklouf, University of Science and Technology, Algeria
Habiba Drias, University of Science and Technology, Algeria
Chapter 2.3. A Structured Approach to Developing a Business Case for New Enterprise Information Systems ... 346
Francisco Chia Cua, Otago Polytechnic, New Zealand
Tony C. Garrett, Korea University, Republic of Korea

Chapter 2.4. Extending Enterprise Architecture with Mobility ... 356
Ming-Chien (Mindy) Wu, University of Western Sydney, Australia
Bhuvan Unhelkar, MethodScience.com & University of Western Sydney, Australia

Chapter 2.5. Enterprise Modelling in Support of Organisation Design and Change ... 370
Joseph Ajaefobi, Loughborough University, UK
Aysin Rahimifard, Loughborough University, UK
Richard Weston, Loughborough University, UK

Chapter 2.6. The Enterprise Systems Approach ... 397
Andrew Targowski, Western Michigan University, USA

Chapter 2.7. Designing Data Marts from XML and Relational Data Sources ... 427
Yasser Hachaichi, Mir@cl Laboratory, Faculté des Sciences Economiques et de Gestion, Tunisia
Jamel Feki, Mir@cl Laboratory, Faculté des Sciences Economiques et de Gestion, Tunisia
Hanene Ben-Abdallah, Mir@cl Laboratory, Faculté des Sciences Economiques et de Gestion, Tunisia

Chapter 2.8. Migrating Legacy Systems to Web Services Architecture ... 452
Shing-Han Li, Tatung University, Taiwan
Shi-Ming Huang, National Chung Cheng University, Taiwan
David C. Yen, Miami University, USA
Cheng-Chun Chang, National Chung Cheng University, Taiwan

Chapter 2.9. EIS for Consumers Classification and Support Decision Making in a Power Utility Database ... 472
Juan Ignacio Guerrero Alonso, University of Seville, Spain
Carlos León de Mora, University of Seville, Spain
Félix Biscarri Triviño, University of Seville, Spain
Iñigo Monedero Goicoechea, University of Seville, Spain
Jesús Biscarri Triviño, University of Seville, Spain
Rocío Millán, University of Seville, Spain

Chapter 2.10. An ERP Adoption Model for Midsize Businesses ... 487
Fahd Alizai, Victoria University, Australia
Stephen Burgess, Victoria University, Australia

Chapter 2.11. Developing and Customizing Federated ERP Systems ... 508
Daniel Lübke, Leibniz Universität Hannover, Germany
Jorge Marx Gómez, Carl von Ossietzky University Oldenburg, Germany
Chapter 2.12. Creation of a Process Framework for Transitioning to a Mobile Enterprise ... 522
Bhuvan Unhelkar, MethodScience.com & University of Western Sydney, Australia

Chapter 2.13. Development and Design Methodologies in DWM ... 533
James Yao, Montclair State University, USA
John Wang, Montclair State University, USA
Qiyang Chen, Montclair State University, USA
June Lu, University of Houston-Victoria, USA

Chapter 2.14. Facilitating Design of Efficient Components by Bridging Gaps Between Data Model and Business Process via Analysis of Service Traits of Data ... 544
Ning Chen, Xi’an Polytechnic University, China

Chapter 2.15. The Utilization of Semantic Web for Integrating Enterprise Systems ... 550
Dimitrios Tektonidis, ALTEC S.A., Greece
Albert Bokma, University of Sunderland, UK

Section III. Tools and Technologies

This section presents extensive coverage of the technology that informs and impacts enterprise information systems. These chapters provide an in-depth analysis of the use and development of innumerable devices and tools, while also providing insight into new and upcoming technologies, theories, and instruments that will soon be commonplace. Within these rigorously researched chapters, readers are presented with examples of the tools that facilitate and support the emergence and advancement of enterprise information systems. In addition, the successful implementation and resulting impact of these various tools and technologies are discussed within this collection of chapters.

Chapter 3.1. Data Warehouse Maintenance, Evolution and Versioning ... 566
Johann Eder, University of Klagenfurt, Austria
Karl Wiggisser, University of Klagenfurt, Austria

Chapter 3.2. Information Supply Chains: Restructuring Relationships, Chains, and Networks ... 584
Hina Arora, Arizona State University, USA
T.S. Raghu, Arizona State University, USA
Ajay Vinze, Arizona State University, USA

Chapter 3.3. Benefits of Information Technology Implementations for Supply Chain Management: An Explorative Study of Progressive Indian Companies ... 596
Prashant R. Nair, Amrita University, Coimbatore, India

Chapter 3.4. Transforming Compensation Management Practices through Web-Based Enterprise Technologies ... 617
Xiaoya Liang, Fudan University, China

Chapter 3.5. Business Process Management as a Critical Success Factor in EIS Implementation ... 625
Vladimír Modrák, Technical University of Košice, Slovakia

Chapter 3.6. E-Markets as Meta-Enterprise Information Systems ... 638
Martin Grieger, Accenture, Germany
Evi Hartmann, SMI Supply Management Institute, Germany
Herbert Kotzab, Copenhagen Business School, Denmark

Chapter 3.7. Enterprise Systems as an Enabler of Fast-Paced Change: The Case of Global B2B Procurement in Ericsson ... 648
Oswaldo Lorenzo, Instituto de Empresa Business School, Spain
Angel Díaz, Instituto de Empresa Business School, Spain
Volume II

Chapter 3.8. Extending Enterprise Application Integration (EAI) with Mobile and Web Services Technologies ... 668
Abbass Ghanbary, MethodScience.com & University of Western Sydney, Australia
Bhuvan Unhelkar, MethodScience.com & University of Western Sydney, Australia

Chapter 3.9. Towards a Model-Centric Approach for Developing Enterprise Information Systems ... 687
Petraq Papajorgji, Center for Applied Optimization, University of Florida, USA
Panos M. Pardalos, Center for Applied Optimization, University of Florida, USA

Chapter 3.10. Impact of Portal Technologies on Executive Information Systems ... 706
Udo Averweg, Information Services, eThekwini Municipality & University of KwaZulu-Natal, South Africa
Geoff Erwin, Cape Peninsula University of Technology, South Africa
Don Petkov, Eastern Connecticut State University, USA

Chapter 3.11. A Voice-Enabled Pervasive Web System with Self-Optimization Capability for Supporting Enterprise Applications ... 714
Shuchih Ernest Chang, National Chung Hsing University, Taiwan

Chapter 3.12. Achieving System and Business Interoperability by Semantic Web Services ... 731
John Krogstie, Norwegian University of Science and Technology (NTNU), Norway, & SINTEF ICT, Norway
Csaba Veres, Norwegian University of Science and Technology (NTNU), Norway
Guttorm Sindre, Norwegian University of Science and Technology (NTNU), Norway
Øyvind Skytøen, Norwegian University of Science and Technology (NTNU), Norway

Chapter 3.13. In-House vs. Off-the-Shelf e-HRM Applications ... 752
Nawaf Al-Ibraheem, KNET, Kuwait
Huub Ruël, University of Twente, The Netherlands, & American University of Beirut, Lebanon
Chapter 3.14. Enterprise Resource Planning Under Open Source Software ... 776
Ashley Davis, University of Georgia, USA

Chapter 3.15. Real Time Decision Making and Mobile Technologies ... 795
Keith Sherringham, IMS Corp, Australia
Bhuvan Unhelkar, MethodScience.com & University of Western Sydney, Australia

Chapter 3.16. Business Driven Enterprise Architecture and Applications to Support Mobile Business ... 805
Keith Sherringham, IMS Corp, Australia
Bhuvan Unhelkar, MethodScience.com & University of Western Sydney, Australia

Chapter 3.17. Mobile Technologies Extending ERP Systems ... 817
Dirk Werth, Institute for Information Systems at German Research Centre for Artificial Intelligence, Germany
Paul Makuch, Institute for Information Systems at German Research Centre for Artificial Intelligence, Germany

Chapter 3.18. Convergence in Mobile Internet with Service Oriented Architecture and Its Value to Business ... 823
Marco Garito, Digital Business, Italy
Chapter 3.19. Enterprise Specific BPM Languages and Tools ... 835
Steen Brahe, Danske Bank, Denmark

Chapter 3.20. Semantic Business Process Mining of SAP Transactions ... 866
Jon Espen Ingvaldsen, The Norwegian University of Science and Technology, Norway
Jon Atle Gulla, The Norwegian University of Science and Technology, Norway

Chapter 3.21. Mining Association Rules from XML Documents ... 879
Laura Irina Rusu, La Trobe University, Australia
Wenny Rahayu, La Trobe University, Australia
David Taniar, Monash University, Australia

Section IV. Utilization and Application

This section introduces and discusses the utilization and application of enterprise information systems around the world. These particular selections highlight, among other topics, enterprise information systems in multiple countries, data mining applications, and critical success factors of enterprise information systems implementation. Contributions included in this section provide excellent coverage of the impact of enterprise information systems on the fabric of our present-day global village.

Chapter 4.1. QoS-Oriented Grid-Enabled Data Warehouses ... 901
Rogério Luís de Carvalho Costa, University of Coimbra, Portugal
Pedro Furtado, University of Coimbra, Portugal
Chapter 4.2. EIS Systems and Quality Management ... 921
Bart H. M. Gerritsen, TNO Netherlands Organization for Applied Scientific Research, The Netherlands

Chapter 4.3. A Procedure Model for a SOA-Based Integration of Enterprise Systems ... 946
Anne Lämmer, sd&m AG, Germany
Sandy Eggert, University of Potsdam, Germany
Norbert Gronau, University of Potsdam, Germany

Chapter 4.4. Size Matters! Enterprise System Success in Medium and Large Organizations ... 958
Darshana Sedera, Queensland University of Technology, Australia

Chapter 4.5. Web Services as XML Data Sources in Enterprise Information Integration ... 972
Ákos Hajnal, Computer and Automation Research Institute, Hungary
Tamás Kifor, Computer and Automation Research Institute, Hungary
Gergely Lukácsy, Budapest University of Technology and Economics, Hungary
László Z. Varga, Computer and Automation Research Institute, Hungary

Chapter 4.6. System-of-Systems Cost Estimation: Analysis of Lead System Integrator Engineering Activities ... 986
Jo Ann Lane, University of Southern California, USA
Barry Boehm, University of Southern California, USA

Chapter 4.7. Consistency and Modularity in Mediated Service-Based Data Integration Solutions ... 997
Yaoling Zhu, Dublin City University, Ireland
Claus Pahl, Dublin City University, Ireland

Chapter 4.8. Data Warehouse and Business Intelligence Systems in the Context of E-HRM ... 1013
Martin Burgard, Saarland University, Germany
Franca Piazza, Saarland University, Germany

Chapter 4.9. Implementation of ERP in Human Resource Management ... 1021
Zhang Li, Harbin Institute of Technology, China
Wang Dan, Harbin Institute of Technology, China
Chang Lei, Harbin Institute of Technology, China

Chapter 4.10. A Study of Information Requirement Determination Process of an Executive Information System ... 1030
Chad Lin, Curtin University of Technology, Australia
Koong Lin, Tainan National University of the Arts, Taiwan

Chapter 4.11. Towards Identifying the Most Important Attributes of ERP Implementations ... 1039
Piotr Soja, Cracow University of Economics, Poland
Dariusz Put, Cracow University of Economics, Poland
Chapter 4.12. Challenges and Solutions for Complex Business Process Management.................... 1060 Minhong Wang, The University of Hong Kong, Hong Kong Kuldeep Kumar, Florida International University, USA Chapter 4.13. Multiple-Step Backtracking of Exception Handling in Autonomous Business Process Management ........................................................................................................................ 1081 Mingzhong Wang, University of Melbourne, Australia Jinjun Chen, Swinburne University of Technology, Australia Kotagiri Ramamohanarao, University of Melbourne, Australia Amy Unruh, University of Melbourne, Australia Chapter 4.14. A Resource-Based Perspective on Information Technology, Knowledge Management, and Firm Performance................................................................................................ 1099 Clyde W. Holsapple, University of Kentucky, USA Jiming Wu, California State University-East Bay, USA Chapter 4.15. A Decision Support System for Selecting Secure Web Services................................ 1113 Khaled M. Khan, Qatar University, Qatar Chapter 4.16. ERP Systems Supporting Lean Manufacturing in SMEs ........................................... 1121 Pritish Halgeri, Kansas State University, USA Roger McHaney, Kansas State University, USA Z. J. Pei, Kansas State University, USA Chapter 4.17. Specifying Software Models with Organizational Styles........................................... 1141 Manuel Kolp, Université Catholique de Louvain, Belgium Yves Wautelet, Université Catholique de Louvain, Belgium Stéphane Faulkner, University of Namur, Belgium Chapter 4.18. Mobile Strategy for E-Business Solution ................................................................... 1162 Anthony S. Atkins, Staffordshire University, UK A. K. Hairul Nizam Pengiran Haji Ali, Staffordshire University, UK Chapter 4.19. Application of Software Metrics in ERP Projects ...................................................... 1172 S. Parthasarathy, Thiagarajar College of Engineering, India Section V. Organizational and Social Implications This section includes a wide range of research pertaining to the social and organizational impact of enterprise information systems. Chapters included in this section analyze the impact of power relationships in system implementation, discuss how enterprise systems can be used to support internal marketing efforts, and demonstrate that perceived shared benefits, system characteristics, and the degree of knowledge of the system are significant influences on an individual’s willingness to use enterprise resource planning systems. The inquiries and methods presented in this section offer insight into the implications of enterprise information systems at both a personal and organizational level, while also emphasizing potential areas of study within the discipline.
Chapter 5.1. Optimization of Enterprise Information Systems through a ‘User Involvement Framework in Learning Organizations’ ............................................................................................ 1183 Sumita Dave, Shri Shankaracharya Institute of Management & Technology, India Monica Shrivastava, Shri Shankaracharya Institute of Management & Technology, India Chapter 5.2. Authority and Its Implementation in Enterprise Information Systems ........................ 1196 Alexei Sharpanskykh, Vrije Universiteit Amsterdam, The Netherlands Chapter 5.3. Enterprise Systems, Control and Drift ......................................................................... 1209 Ioannis Ignatiadis, University of Bath, UK Joe Nandhakumar, University of Warwick, UK Chapter 5.4. The Impact of Enterprise Systems on Business Value ................................................. 1233 Sanjay Mathrani, Massey University, New Zealand Mohammad A. Rashid, Massey University, New Zealand Dennis Viehland, Massey University, New Zealand Chapter 5.5. People-Oriented Enterprise Information Systems ........................................................ 1247 Giorgio Bruno, Politecnico di Torino, Italy Chapter 5.6. A SOA-Based Approach to Integrate Enterprise Systems............................................ 1265 Anne Lämmer, University of Potsdam, Germany Sandy Eggert, University of Potsdam, Germany Norbert Gronau, University of Potsdam, Germany Chapter 5.7. Achieving Business Benefits from ERP Systems ......................................................... 1279 Alok Mishra, Atilim University, Turkey Chapter 5.8. Experiences of Cultures in Global ERP Implementation ............................................. 1295 Esther Brainin, Ruppin Academic Center, Israel Chapter 5.9. Enterprise Resource Planning Systems: Effects and Strategic Perspectives in Organizations .... 1318 Alok Mishra, Atilim University, Turkey
Volume III
Chapter 5.10. The Management of CRM Information Systems in Small B2B Service Organisations: A Comparison between French and British Firms .................................................... 1327 Calin Gurau, GSCM – Montpellier Business School, France Chapter 5.11. Information Technologies as a Vital Channel for an Internal E-Communication Strategy ............. 1341 José A. Lastres-Segret, University of La Laguna, Spain José M. Núñez-Gorrín, University of La Laguna, Tenerife, Spain
Chapter 5.12. Early User Involvement and Participation in Employee Self-Service Application Deployment: Theory and Evidence from Four Dutch Governmental Cases .................................... 1349 Gerwin Koopman, Syntess Software, The Netherlands Ronald Batenburg, Utrecht University, The Netherlands Chapter 5.13. Assessing Information Technology Capability vs. Human Resource Information System Utilization............................................................................................................................. 1370 Ralf Burbach, Institute of Technology Carlow, Ireland Tony Dundon, National University of Ireland-Galway, Ireland Chapter 5.14. Exploring Perceptions about the Use of e-HRM Tools in Medium Sized Organizations .... 1379 Tanya Bondarouk, University of Twente, The Netherlands Vincent ter Horst, Saxion Knowledge Center Innovation and Entrepreneurship, The Netherlands Sander Engbers, COGAS BV. Business Unit Infra & Networkmanagement, The Netherlands
Chapter 5.15. Adoption, Improvement, and Disruption: Predicting the Impact of Open Source Applications in Enterprise Software ................................................................................................. 1399 Michael Brydon, Simon Fraser University, Canada Aidan R. Vining, Simon Fraser University, Canada Section VI. Managerial Impact This section presents contemporary coverage of the managerial implications of enterprise information systems. Particular contributions explore relationships among information technology, knowledge management, and firm performance, while others discuss the evaluation, adoption, and technical infrastructure of enterprise information systems. The managerial research provided in this section allows administrators, practitioners, and researchers to gain a better sense of how enterprise information systems can inform their practices and behavior. Chapter 6.1. A Domain Specific Strategy for Complex Dynamic Processes .................................... 1422 Semih Cetin, Cybersoft Information Technologies, Turkey N. Ilker Altintas, Cybersoft Information Technologies, Turkey Ozgur Tufekci, Cybersoft Information Technologies, Turkey Chapter 6.2. Measuring the Impact of an ERP Project at SMEs: A Framework and Empirical Investigation...... 1447 Maria Argyropoulou, Brunel University, UK George Ioannou, Athens University of Economics and Business, Greece Dimitrios N. Koufopoulos, Brunel University, UK Jaideep Motwani, Grand Valley State University, USA
Chapter 6.3. Managing Temporal Data ............................................................................................. 1461 Abdullah Uz Tansel, Baruch College, CUNY, USA
Chapter 6.4. Integrative Information Systems Architecture: Document & Content Management ...... 1470 Len Asprey, Practical Information Management Solutions Pty Ltd, Australia Rolf Green, OneView Pty Ltd, Australia Michael Middleton, Queensland University of Technology, Australia Chapter 6.5. Identifying and Managing Stakeholders in Enterprise Information System Projects .............. 1480 Albert Boonstra, University of Groningen, The Netherlands
Chapter 6.6. Understanding Information Technology Implementation Failure: An Interpretive Case Study of Information Technology Adoption in a Loosely Coupled Organization ................... 1496 Marie-Claude Boudreau, University of Georgia, USA Jonny Holmström, Umeå University, Sweden Chapter 6.7. Improving Supply Chain Performance through the Implementation of Process Related Knowledge Transfer Mechanisms ....................................................................................... 1513 Stephen McLaughlin, University of Glasgow, UK Chapter 6.8. Meta-Heuristic Approach to Solve Mixed Vehicle Routing Problem with Backhauls in Enterprise Information System of Service Industry ..................................................................... 1537 S. P. Anbuudayasankar, Amrita School of Engineering, India K. Ganesh, Global Business Services – Global Delivery, IBM India Private Limited, India K. Mohandas, Amrita School of Engineering, India Tzong-Ru Lee, National Chung Hsing University, Taiwan, ROC Chapter 6.9. Achieving Supply Chain Management (SCM): Customer Relationship Management (CRM) Synergy Through Information and Communication Technology (ICT) Infrastructure in Knowledge Economy.................................................................................................................... 1553 Ashutosh Mohan, Banaras Hindu University (BHU), India Shikha Lal, Banaras Hindu University (BHU), India Section VII. Critical Issues This section addresses conceptual and theoretical issues related to the field of enterprise information systems, including issues related to customer relationship management, critical success factors, and business strategies. Within these chapters, the reader is presented with analysis of the most current and relevant conceptual inquiries within this growing field of study. Particular chapters address the successes of enterprise resource planning through technology and present strategies for overcoming challenges related to enterprise system adoption. Overall, contributions within this section ask unique, often theoretical questions related to the study of enterprise information systems and, more often than not, conclude that solutions are both numerous and contradictory. Chapter 7.1. Preparedness of Small and Medium-Sized Enterprises to Use Information and Communication Technology as a Strategic Tool .............................................................................. 1573 Klara Antlova, Technical University, Czech Republic
Chapter 7.2. Doing Business on the Globalised Networked Economy: Technology and Business Challenges for Accounting Information Systems ............................................................................. 1593 Adamantios Koumpis, ALTEC S.A., Greece Nikos Protogeros, University of Macedonia, Greece Chapter 7.3. Factors Influencing Information System Flexibility: An Interpretive Flexibility Perspective ........ 1605 Ruey-Shun Chen, China University of Technology, Taiwan Chia-Ming Sun, National Yunlin University of Science & Technology, Taiwan Marilyn M. Helms, Dalton State College, USA Wen-Jang (Kenny) Jih, Middle Tennessee State University, USA Chapter 7.4. Challenges in Enterprise Information Systems Implementation: An Empirical Study ................. 1617 Ashim Raj Singla, Indian Institute of Foreign Trade, New Delhi, India
Chapter 7.5. A Grounded Theory Study of Enterprise Systems Implementation: Lessons Learned from the Irish Health Services .......................................................................................................... 1631 John Loonam, Dublin City University, Ireland Joe McDonagh, University of Dublin, Trinity College, Ireland Chapter 7.6. An Object-Oriented Abstraction Mechanism for Generic Enterprise Modeling .......... 1646 Islam Choudhury, London Metropolitan University, UK Sergio de Cesare, Brunel University, UK Emily Di Florido, Brunel University, UK Chapter 7.7. Integrating Enterprise Systems..................................................................................... 1657 Mark I. Hwang, Central Michigan University, USA Chapter 7.8. Analyzing Diffusion and Value Creation Dimensions of a Business Case of Replacing Enterprise Systems ............................................................................................................................ 1663 Francisco Chia Cua, University of Otago, New Zealand Tony C. Garrett, Korea University, Republic of Korea Chapter 7.9. Challenges of Data Management in Always-On Enterprise Information Systems ...... 1695 Mladen Varga, University of Zagreb, Croatia Chapter 7.10. Studying Human Resource Information Systems Implementation using Adaptive Structuration Theory: The Case of HRIS Implementation at Dow Chemical Company .................. 1715 Huub Ruël, University of Twente, The Netherlands, & American University of Beirut, Lebanon Chapter 7.11. Consequences and Strategic Implications of Networked Enterprise and Human Resources .......... 1730 Ana Isabel Jiménez-Zarco, Open University of Catalonia, Spain María Pilar Martínez-Ruiz, University of Castilla-La Mancha, Spain Óscar González-Benito, University of Salamanca, Spain
Chapter 7.12. An Extended Model of Decision Making: A Devil’s Advocate Workshop ................ 1738 David Sammon, University College Cork, Ireland Chapter 7.13. Feral Systems and Other Factors Influencing the Success of Global ERP Implementations ............................................................................................................................... 1748 Don Kerr, University of the Sunshine Coast, Australia
Section VIII. Emerging Trends This section highlights research potential within the field of enterprise information systems while exploring uncharted areas of study for the advancement of the discipline. Chapters within this section highlight new trends in adaptive information integration, as well as the challenges faced in cross-organizational enterprise resource planning projects. The contributions that conclude this exhaustive, multi-volume set provide emerging trends and suggestions for future research within this rapidly expanding discipline. Chapter 8.1. Motivations and Trends for IT/IS Adoption: Insights from Portuguese Companies......... 1769 João Varajão, Centro Algoritmi and University of Trás-os-Montes e Alto Douro, Portugal Antonio Trigo, Escola Superior de Tecnologia e Gestão de Oliveira do Hospital, Portugal João Barroso, Instituto Superior de Engenharia do Porto, Portugal Chapter 8.2. Next-Generation Enterprise Systems ........................................................................... 1789 Charles Møller, Aalborg University, Denmark Chapter 8.3. ERP Trends, Opportunities, and Challenges: A Focus on the Gulf Region in the Middle East ....... 1797 Maha Shakir, Zayed University, UAE
Chapter 8.4. The Future of ERP and Enterprise Resource Management Systems ........................... 1816 Carlos Ferran, The Pennsylvania State University, USA Ricardo Salim, Universidad Autónoma de Barcelona, Spain, & Cautus Networks Corp., Venezuela Chapter 8.5. Next-Generation IT for Knowledge Distribution in Enterprises .................................. 1836 Ramón Brena, Tecnologico de Monterrey, Mexico Gabriel Valerio, Tecnologico de Monterrey, Mexico Jose-Luis Aguirre, Tecnologico de Monterrey, Mexico Chapter 8.6. Sizing ERP Implementation Projects: An Activity-Based Approach ........................... 1847 Guy Janssens, Open University Netherland, The Netherlands Rob Kusters, Open University Netherland, The Netherlands & Eindhoven University of Technology, The Netherlands Fred Heemstra, Open University Netherland, The Netherlands & KWD Result Management, The Netherlands
Chapter 8.7. Conducting Multi-Project Business Operations in SMEs and IS Support ................... 1871 Igor Vrecko, University of Maribor, Slovenia Anton Hauc, University of Maribor, Slovenia Vesna Čančer, University of Maribor, Slovenia Igor Perko, University of Maribor, Slovenia Chapter 8.8. Flow-Based Adaptive Information Integration ............................................................ 1902 Dickson K.W. Chiu, Dickson Computer Systems, Hong Kong Thomas Trojer, University of Innsbruck, Austria Hua Hu, Zhejiang Gongshang University, China Haiyang Hu, Zhejiang Gongshang University, China Yi Zhuang, Zhejiang Gongshang University, China Patrick C. K. Hung, University of Ontario Institute of Technology, Canada Chapter 8.9. Enterprise System in the German Manufacturing Mittelstand ..................................... 1924 Tobias Schoenherr, Michigan State University, USA Ditmar Hilpert, Reutlingen University, Germany Ashok K. Soni, Indiana University, USA M.A. Venkataramanan, Indiana University, USA Vincent A. Mabert, Indiana University, USA Chapter 8.10. Engineering the Coordination Requirements in Cross-Organizational ERP Projects: A Package of Good Practices ............................................................................................................ 1941 Maya Daneva, University of Twente, The Netherlands Chapter 8.11. ERP and Beyond......................................................................................................... 1960 Suresh Subramoniam, Prince Sultan University, Saudi Arabia Mohamed Tounsi, Prince Sultan University, Saudi Arabia Shehzad Khalid Ghani, Prince Sultan University, Saudi Arabia K. V. Krishnankutty, College of Engineering, Trivandrum, India
Preface
The data collected by organizations is growing in volume and complexity. As such, businesses are abandoning traditional methods and relying more heavily on enterprise information systems to aid in the analysis and utilization of time-sensitive data and organizational knowledge. Enterprise information systems have gained in popularity and even SMEs, recognizing the competitive advantage afforded by real-time decision support, have begun to adopt the technologies. The growth in enterprise information system adoption makes it challenging for experts and practitioners to stay informed of the field’s most up-to-date research. That is why Business Science Reference is pleased to offer this three-volume reference collection that will empower students, researchers, and academicians with a strong understanding of critical issues within enterprise information systems by providing both broad and detailed perspectives on cutting-edge theories and developments. This reference is designed to act as a single reference source on conceptual, methodological, technical, and managerial issues, as well as provide insight into emerging trends and future opportunities within the discipline. Enterprise Information Systems: Concepts, Methodologies, Tools and Applications is organized into eight distinct sections that provide comprehensive coverage of important topics. The sections are: (1) Fundamental Concepts and Theories, (2) Development and Design Methodologies, (3) Tools and Technologies, (4) Utilization and Application, (5) Organizational and Social Implications, (6) Managerial Impact, (7) Critical Issues, and (8) Emerging Trends. The following paragraphs provide a summary of what to expect from this invaluable reference tool. Section 1, Fundamental Concepts and Theories, serves as a foundation for this extensive reference tool by addressing crucial theories essential to the understanding of enterprise information systems. Chapters such as Exploring Enterprise Information Systems by Malihe Tabatabaie, Richard Paige, and Chris Kimble, and Enterprise Systems in Small and Medium-Sized Enterprises by Sanjay Mathrani, Mohammad Rashid, and Dennis Viehland give an introduction and overview of enterprise information systems in a contemporary business environment. Free and Open Source Enterprise Resources Planning by Rogerio de Carvalho adds an important dimension to the under-researched subject of free/open source enterprise resource planning systems by comparing it to proprietary systems and highlighting its innovative potential. Additional selections, including Location-Based Service (LBS) System Analysis and Design by Yuni Xia, Jonathan Munson, and David Wood, and An Overview of Executive Information Systems by Gary Moynihan focus on providing backgrounds and introductions to specific concepts within enterprise information systems. These and several other foundational chapters provide a wealth of expert research on the elemental concepts and ideas surrounding enterprise information systems. Section 2, Development and Design Methodologies, presents in-depth coverage of the conceptual design and architecture of enterprise information systems, focusing on aspects including enterprise resource planning, service-oriented architecture, and decision support systems. Designing and implementing effective processes and strategies are the focus of such chapters as Development and Design Methodologies in DWM by James Yao, John Wang, Qiyang Chen, and June Lu, and Enterprise Modeling
in Support of Organisation Design and Change by Joseph Ajaefobi, Aysin Rahimifard, and Richard Weston. An ERP Adoption Model for Midsize Businesses by Fahd Alizai and Stephen Burgess offers a model that contains implementation processes, stages, factors, and issues associated with ERP adoption in midsize businesses. Youcef Aklouf and Habiba Drias’s An Adaptive E-Commerce Architecture for Enterprise Information Exchange presents an architecture that allows partners to exchange information with other organizations without modifying their own systems. With contributions from leading international researchers, this section offers copious developmental approaches and design methodologies for enterprise information systems. Section 3, Tools and Technologies, presents extensive coverage of the various tools and technologies used in the development and implementation of enterprise information systems. This comprehensive section includes such chapters as Real Time Decision Making and Mobile Technologies, by Keith Sherringham and Bhuvan Unhelkar, and Mobile Technologies Extending ERP Systems by Dirk Werth and Paul Makuch, which describe various techniques and models for using mobile technology to support enterprise information systems. Extending Enterprise Application Integration (EAI) with Mobile and Web Services Technologies by Abbass Ghanbary and Bhuvan Unhelkar demonstrates how the technologies of web services open up the doors to collaborative enterprise architecture integration and service-oriented architecture, resulting in business integration. Finally, chapters such as Data Warehouse Maintenance, Evolution and Versioning by Johann Eder and Karl Wiggisser, and Hybrid Data Mining for Medical Applications by Syed Hassan and Brijesh Verma present tools to adapt to the challenges of various data warehousing mechanisms. In all, this section provides coverage of a variety of tools and technologies that inform and enhance modern enterprise information systems. Section 4, Utilization and Application, describes how enterprise information systems have been utilized and offers insight on important lessons for their continued use and evolution. Including chapters such as A Decision Support System for Selecting Secure Web Services by Khaled Khan, and EIS Systems and Quality Management by Bart Gerritsen, this section investigates numerous methodologies that have been proposed and enacted in enterprise information systems, as well as their results. As this section continues, a number of case studies in the use of enterprise information systems are presented, such as Enterprise System in the German Manufacturing Mittelstand by Tobias Schoenherr, Ditmar Hilpert, Ashok Soni, M.A. Venkataramanan, and Vincent Mabert, and Size Matters! Enterprise System Success in Medium and Large Organizations by Darshana Sedera. Contributions found in this section provide comprehensive coverage of the practicality and current use of enterprise information systems. Section 5, Organizational and Social Implications, includes chapters discussing the organizational and social impact of enterprise information systems. People-Oriented Enterprise Information Systems by Giorgio Bruno proposes a notation system called People-Oriented Business Process Notation to solve the problem of effectively integrating conversation and business processes. Experiences of Cultures in Global ERP Implementation by Esther Brainin examines how cultural differences in global enterprises affect the implementation of enterprise resource planning systems.
The Impact of Enterprise Systems on Business Value by Sanjay Mathrani, Mohammad Rashid, and Dennis Viehland explores two case studies that illustrate how enterprise information systems implementation can impact organizational functions. This section continues with Authority and Its Implementation in Enterprise Information Systems by Alexei Sharpanskykh, which discusses how power dynamics affect enterprise information systems integration and proposes a logic-based specification language for representing power relations. Overall, these chapters present a detailed investigation of the complex relationship between individuals, organizations, and enterprise information systems. Section 6, Managerial Impact, presents focused coverage of enterprise information systems as they relate to improvements and considerations in the workplace. Managing Temporal Data by Abdullah Tansel addresses modeling and design issues related to temporal databases. Other chapters, such as
Integrative Information Systems Architecture by Len Asprey, discuss management considerations, document and web content management, and the technical infrastructure supporting these systems. In all, the chapters in this section offer specific perspectives on how managerial perspectives and developments in enterprise information systems inform each other to create more meaningful user experiences. Section 7, Critical Issues, addresses vital issues related to enterprise information systems, including customer relationship management, critical success factors, and business strategies. Chapters such as Feral Systems and Other Factors Influencing the Success of Global ERP Implementations by Don Kerr, and Challenges in Enterprise Information Systems Implementation by Ashim Singla discuss the success of enterprise information systems implementation based on technology, people, and processes. Additional selections, such as An Extended Model of Decision Making by David Sammon, Integrating Enterprise Systems by Mark Hwang, and Preparedness of Small and Medium-Sized Enterprises to Use Information and Communication Technology as a Strategic Tool by Klara Antlova address critical success factors in the deployment of enterprise information systems. Section 8, Emerging Trends, highlights areas for future research within the field of enterprise information systems, while exploring new avenues for the advancement of the discipline. Beginning this section is ERP Trends, Opportunities, and Challenges by Maha Shakir. This selection provides a richer understanding of key ERP issues by discussing emerging industry trends. The evolution of enterprise resource planning is discussed in The Future of ERP and Enterprise Resource Management Systems by Carlos Ferran and Ricardo Salim, and Engineering the Coordination Requirements in Cross-organizational ERP Projects by Maya Daneva explores the complexity of cross-organizational enterprise resource planning implementation. These and several other emerging trends and suggestions for future research can be found within the final section of this exhaustive multi-volume set. Although the primary organization of the contents in this multi-volume work is based on its eight sections, offering a progression of coverage of the important concepts, methodologies, technologies, applications, social issues, and emerging trends, the reader can also identify specific content by utilizing the extensive indexing system listed at the end of each volume. Furthermore, to ensure that the scholar, researcher, and educator have access to the entire contents of this multi-volume set, as well as additional coverage that could not be included in the print version of this publication, the publisher will provide unlimited multi-user electronic access to the online aggregated database of this collection for the life of the edition, free of charge, when a library purchases a print copy. This aggregated database provides far more content than can be included in the print version, in addition to continual updates. This unlimited access, coupled with the continuous updates to the database, ensures that the most current research is accessible to knowledge seekers.
As a comprehensive collection of research on the latest findings related to using technology to provide various services, Enterprise Information Systems: Concepts, Methodologies, Tools and Applications provides researchers, administrators, and all audiences with a complete understanding of the development of applications and concepts in enterprise information systems. Given the vast number of issues concerning usage, failure, success, policies, strategies, and applications of enterprise information systems in organizations, Enterprise Information Systems: Concepts, Methodologies, Tools and Applications addresses the demand for a resource that encompasses the most pertinent research in enterprise information systems development, deployment, and impact.
Section I
Fundamental Concepts and Theories This section serves as the foundation for this exhaustive reference tool by addressing crucial theories essential to the understanding of enterprise information systems. Chapters found within these pages provide an excellent framework in which to position enterprise information systems within the field of information science and technology. Individual contributions provide overviews of the history of enterprise information systems, the impact of information systems on organizations, and various enterprise information system processes such as enterprise resource planning and decision support systems. Within this introductory section, the reader can learn and choose from a compendium of expert research on the elemental theories underscoring enterprise information systems.
Chapter 1.1
Principles and Experiences: Designing and Building Enterprise Information Systems

Mehmet S. Aktas, TUBITAK (Turkish National Science Foundation), Turkey
DOI: 10.4018/978-1-60566-723-2.ch004

Abstract

The data requirements of e-business applications have increased over the years. These applications present an environment for acquiring, processing, and sharing data among interested parties. To manage information in such a data-intensive application domain, independent enterprise e-business applications have developed their own solutions to information services. However, these solutions are not interoperable with each other, target vastly different systems, and address diverse sets of requirements. They require greater interoperability to enable communication between different systems, so that they can share and utilize each other’s resources. To address these challenges, we discuss principles and experiences for designing and building a novel enterprise information system. We introduce a novel architecture for a hybrid information service, which provides unification, federation, and interoperability of major Web-based information services. The hybrid information service is designed as an add-on information system, which interacts with
the local information services and assembles their metadata instances under one hybrid architecture. It integrates different information services using unification and federation concepts. In this chapter, we summarize the principles and experiences gained in designing and building the semantics, architecture, and implementation for the hybrid information service.
Introduction

The data requirements of e-business applications have increased over the years. These applications present an environment for acquiring, processing, and sharing data among interested parties. To manage data in such a data-intensive enterprise business application domain, Service Oriented Architecture (SOA) principles have gained great importance. A Service Oriented Architecture is simply a collection of services that are put together to achieve a common goal and that communicate with each other for either data passing or coordinating some activity. There is an emerging need for Web-based Enterprise Information Systems (EIS)
that manage all the information that may be associated with wide-scale SOA-based e-business applications. Over the years, independent enterprise e-business applications have developed their own customized implementations of Information Service Specifications. These EIS solutions are not interoperable with each other, target vastly different systems, and address diverse sets of requirements (Zanikolas & Sakellariou, 2005). They require greater interoperability to enable communication between different systems, so that they can share and utilize each other’s resources. Furthermore, they do not provide uniform interfaces for publishing and discovery of information. In turn, this creates a limitation on the client-end (e.g. fat client-end applications), as the users have to interact with more than one EIS implementation. For example, large-scale e-business applications require management of large amounts of relatively slowly varying metadata. As another example, dynamic Web service collections, gathered together at any one time to perform an e-business operation, require greater support for dynamic metadata. Previous solutions do not address the management requirements of both large-scale, static and small-scale, highly-dynamic metadata associated with Web Services (Zanikolas & Sakellariou, 2005). None of the existing solutions enables communication between different e-business applications, so that they can share and utilize each other’s resources, offer a unified access interface, and address diverse sets of application requirements (OGF GIN-CG). We therefore see this as an important area of investigation, especially for the enterprise e-business application domain. This chapter introduces a Hybrid Service as an EIS that addresses the metadata management requirements of both large-scale, static and small-scale, highly-dynamic metadata domains. The main novelty of this chapter is to describe the semantics, architecture, and implementation of an EIS integrating different Information Services by using unification and federation concepts. The implications of this study are two-fold. First is to describe a generic Information Service architecture, which supports one-to-many information service implementations as local data sources and integrates different kinds of Web Service metadata at a higher conceptual level, while ignoring the implementation details of the local data-systems. Second is to describe the customized implementations of two widely-used Web Service Specifications: the WS-I compatible Web Service Context (WS-Context) (Bunting, et al., 2003) and Universal Description, Discovery and Integration (UDDI) (Bellwood, Clement, & von Riegen, 2003) Specifications. The organization of this chapter is as follows. Section 2 reviews the relevant work. Section 3 gives an overview of the system. Section 4 presents the semantics of the Hybrid Service. Sections 5-6 present the architectural design details and the prototype implementation of the system. Finally, Section 7 contains a summary of the chapter.
Relevant Work

Unifying heterogeneous data sources under a single architecture has been the target of many investigations (Ziegler & Dittrich, 2004). Information integration is mainly studied by distributed database systems research and investigates how to share data at a higher conceptual level, while ignoring the implementation details of the local data systems (Ozsu, 1999; Valduriez & Pacitti, 2004). Previous work on merging heterogeneous information systems may broadly be categorized as global-as-view and local-as-view integration (Florescu, Levy, & Mendelzon, 1998). In the former category, data from several sources are transformed into a global schema and may be queried with a uniform query interface. Much work has been done on automating the information federation process using the global schema approach. In the latter category, queries are transformed into specialized queries over the local databases and integration is carried out by transforming queries.
Although the global schema approach captures the expressiveness of customized local schemas, it does not scale up to a high number of data sources. In the local-as-view approach, each local system’s schema needs to be mapped against the others to transform the queries. In turn, this leads to a large number of mappings that need to be created and managed. The proposed system differs from local-as-view approaches, as its query transformation happens between a unified schema and local schemas. It utilizes and leverages previous work on the global-as-view approach for integrating heterogeneous local data services. The previous work mainly focuses on solutions that automate the information federation process at the semantics level. Different from the previous work, the proposed approach presents a system architecture that enables information integration at the application level. To the best of our knowledge, the proposed system is pioneering work, as it describes an Information Service architecture that enables unification and federation of information coming from different metadata systems. One limitation is that it does not scale to a high number of local data-systems due to low-level manual semantic schema integration. To facilitate testing of our system, we did the low-level information federation manually through a delicate analysis of the structure and semantics of each target schema. Since the main focus of our research is to explore information integration at the application level, we leave the investigation of a low-level automated information federation capability for a future study. Locating services of interest in Web service intensive environments has recently become important, since systems based on service-oriented architecture have increased in number and gained popularity in recent years. The UDDI Specification is a widely used standard that enables services to advertise themselves and discover other services. A number of studies extend and improve the out-of-box UDDI Specification (Open_GIS_Consortium_Inc.; Sycline; Galdos; Dialani, 2002;
ShaikhAli, Rana, Al-Ali, & Walker, 2003; Verma, Sivashanmugam, Sheth, Patil, Oundhakar, & Miller; GRIMOIRES). The UDDI-M (Dialani, 2002) and UDDIe (ShaikhAli, Rana, Al-Ali, & Walker, 2003) projects introduced the idea of associating metadata and lifetime with UDDI Registry service descriptions, where retrieval relies on matches of attribute name-value pairs between service descriptions and service requests. METEOR-S (Verma, Sivashanmugam, Sheth, Patil, Oundhakar, & Miller) leveraged the UDDI Specification by utilizing semantic web languages to describe service entries. Grimoires (GRIMOIRES) extends the functionalities of UDDI to provide a semantics-enabled registry designed and developed for the MyGrid project (MyGrid). It supports third-party attachment of metadata about services and represents all published metadata in the form of RDF triples, either in a database, in a file, or in memory. Although the out-of-box UDDI Specification is a widely used standard, it is limited to keyword-based query capabilities. It neither allows metadata-oriented queries nor takes into account the volatile behavior of services. The previous work on UDDI extensions consists of centralized, database-based solutions. Thus, it offers low fault-tolerance and low performance as opposed to decentralized and in-memory data-systems. The UDDI Specification is not designed to coordinate activities of Web Services participating in workflow-style applications. Thus, it does not support the data-sharing and metadata management requirements of rich-interacting systems. The proposed system addresses the limitations of previous work by introducing an add-on architecture, which runs one layer above the implementations of UDDI and its extensions. It leverages previous work on UDDI and improves the quality of UDDI-based metadata-systems in terms of fault-tolerance and high-performance. To the best of our knowledge, this approach is unique, since it improves the qualities of existing implementations of Information Services without changing their code. Different from the previous
UDDI work, the proposed system supports data-sharing and manages stateful interactions of Web Services. The proposed research also introduces the semantics, communication protocols, and implementation of an extended UDDI version, which addresses the metadata management requirements of the aforementioned application domains. Managing stateful interactions of Web Services is an important problem in rich-interacting systems. There are varying specifications focusing on point-to-point service communication, such as the Web Service Resource Framework (WSRF) and WS-Metadata Exchange (WS-ME). The WSRF specification, proposed by the Globus alliance, IBM and HP, defines conventions for managing state, so that collaborating applications can discover, inspect, and interact with stateful resources in a standard way. The WS-ME provides a mechanism a) to share information about the capabilities of participating Web Services and b) to allow querying a WS Endpoint to retrieve metadata about what is needed to interact with it. Communication among services is also achieved with a centralized metadata management strategy, the Web Services Context (WS-Context) Specification (Bunting, et al., 2003). The WS-Context Specification defines a simple mechanism to share and keep track of common information shared between multiple participants in Web service interactions. It is a lightweight storage mechanism, which allows the participants of an activity to propagate and share context information. Although point-to-point methodologies successfully manage stateful information, they provide service conversation only with metadata coming from the two services that exchange information. This is a limitation, since they become inefficient when the number of communicating services increases. We find the WS-Context Specification a promising approach to tackle the problem of managing distributed session state, since it models a session metadata repository as an external entity, where more than two services can easily access/store highly dynamic, shared metadata. Even though it is promising, the WS-Context Specification has some limitations, as described below. Firstly, the context service has limited functionality, offering only two primary operations: GetContext and SetContext. Secondly, the WS-Context Specification is only focused on defining stateful interactions of Web Services. It does not define a searchable repository for interaction-independent information associated with the services involved in an activity. Thirdly, the WS-Context Specification does not define a data model to manage stateful Web Service information. The proposed system differs from previous work focusing on point-to-point service communication, since it adopts a centralized metadata management strategy to regulate the interactions. It adopts the WS-Context Specification and presents an extended version of the specification and its implementation. The prototype implementation manages dynamically generated session-related metadata. To the best of our knowledge, the proposed approach is unique, since none of the previous work on Information Services implemented the WS-Context Specification to manage stateful interactions of services. The proposed Hybrid Service leverages the extended WS-Context Service implementation and improves its quality in terms of fault-tolerance and high-performance.
Hybrid Service

We designed and built a novel Hybrid Information Service architecture, called the Hybrid Service, which provides unification, federation, and interoperability of major Information Services. The Hybrid Service forms an add-on architecture that interacts with the local information services and unifies them in a higher-level hybrid system. In other words, it provides a unifying architecture, where one can assemble metadata instances of different information services. To facilitate testing of the system, we integrated the Hybrid Service with two local information service implementations: WS-Context and Extended UDDI. We discuss the semantics of the Hybrid Service in the following section, followed by a section discussing its architecture.
Semantics

The semantics of the system may be analyzed under four categories: extended UDDI, WS-Context, Unified Schema, and Hybrid Service Schema. The extended UDDI Specification extends the existing out-of-box UDDI Specification to address its aforementioned limitations. The WS-Context Specification improves the existing out-of-box Web Service Context Specification to meet the aforementioned application requirements. The Unified Schema Specification integrates these two information service specifications. The Hybrid Service Schema consists of two small schemas: the Hybrid Schema and the SpecMetadata Schema. These define the necessary abstract data models to achieve a generic architecture for unification and federation of different information service implementations.
The Extended UDDI Specification

We have designed extensions to the out-of-box UDDI Data Structure (Bellwood, Clement, & von Riegen, 2003) to be able to associate both prescriptive and descriptive metadata with service entries. This way the proposed system addresses the limitations of UDDI (explained in Section 2) and interoperates with existing UDDI clients without requiring excessive changes in their implementations. We name our version of UDDI the extended UDDI. The Extended UDDI Schema: The schema addresses the metadata requirements of Geographical Information System/Sensor applications by extending the out-of-box UDDI data model. It includes the following additional/modified entities: a) a service attribute entity (serviceAttribute) and
b) an extended business service entity (businessService). We describe the additional/modified data model entities as follows. Business service entity structure: The UDDI’s business service entity structure contains descriptive, yet limited information about Web Services. A comprehensive description of the out-of-box business service entity structure defined by UDDI can be found in (Bellwood, Clement, & von Riegen, 2003). Here, we only discuss the additional XML structures introduced to expand on the existing business service entity. These additional XML elements are a) service attribute and b) lease. The service attribute XML element corresponds to static metadata (e.g. the WSDL of a given service). A lease structure describes a period of time during which a service can be discoverable. Service attribute entity structure: A service attribute (serviceAttribute) data structure describes information associated with service entities. Each service attribute corresponds to a piece of metadata, and it is simply expressed with (name, value) pairs. Apart from similar approaches (Dialani, 2002; ShaikhAli, Rana, Al-Ali, & Walker, 2003), in the proposed system a service attribute includes a) a list of abstractAttributeData elements, b) a categoryBag, and c) a boundingBox XML structure. An abstractAttributeData element is used to represent metadata that is directly related to the functionality of the service and to store/maintain such domain-specific auxiliary files as-is. This allows us to add third-party data models such as a “capabilities.xml” metadata file describing the data coverage of domain-specific services such as geospatial services. An abstractAttributeData can be in any representation format, such as XML or RDF. This data structure allows us to pose domain-specific queries on the metadata catalog. Say, an abstractAttributeData of a geospatial service entry contains a “capabilities.xml” metadata file. As it is in XML format, a client may conduct a find_service operation with an XPATH query statement to be carried out on the abstractAttributeData, i.e. “capabilities.xml”. In this case, the results will be the list of geospatial service entries that satisfy the domain-specific XPATH query. A categoryBag element is used to provide a custom classification scheme to categorize serviceAttribute elements. A simple classification could be whether the service attribute is prescriptive or descriptive. A boundingBox element is used to describe both temporal and spatial attributes of a given geographic feature. This way the system enables spatial query capabilities on the metadata catalog.
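To make this structure concrete, the following is a minimal illustrative sketch of a serviceAttribute instance. The element names serviceAttribute, abstractAttributeData, categoryBag, and boundingBox come from the schema described above; all key values, the keyedReference layout inside the categoryBag, and the boundingBox attributes are assumptions made only to suggest a plausible shape, not the normative schema:

<serviceAttribute attributeKey="uuid:example-attribute-001">
  <!-- a (name, value) pair, e.g. the descriptive throughput metadata -->
  <name>throughput</name>
  <value>0.9</value>
  <categoryBag>
    <!-- hypothetical classification: prescriptive vs. descriptive -->
    <keyedReference keyName="attributeType" keyValue="descriptive"/>
  </categoryBag>
  <abstractAttributeData>
    <!-- domain-specific payload stored as-is, e.g. an OGC-style capabilities.xml document -->
  </abstractAttributeData>
  <!-- hypothetical spatial extent enabling spatial queries on the catalog -->
  <boundingBox minLat="-45.0" minLon="110.0" maxLat="-10.0" maxLon="155.0"/>
</serviceAttribute>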
Extended UDDI Schema XML API: We present extensions/modifications to the existing UDDI XML API set to standardize the additional capabilities of our implementation. These additional capabilities can be grouped under two XML API categories: Publish and Inquiry. The Publish XML API is used to publish metadata instances belonging to different entities of the extended UDDI Schema. It extends the existing UDDI Publish XML API set. It consists of the following functions:

save_service: Used to extend the out-of-box UDDI save_service functionality. The save_service API call adds/updates one or more Web Services into the service. Each service entity may contain one-to-many serviceAttribute elements and may have a lifetime (lease).
save_serviceAttribute: Used to register or update one or more semi-static metadata entries associated with a Web Service.
delete_service: Used to delete one or more service entity structures.
delete_serviceAttribute: Used to delete existing serviceAttribute elements from the service.

The Inquiry XML API is used to pose inquiries and to retrieve metadata from the Extended UDDI Information Service. It extends the existing UDDI Inquiry XML API set. It consists of the following functions:

find_service: Used to extend the out-of-box UDDI find_service functionality. The find_service API call locates specific services within the service. It takes additional input parameters, such as serviceAttributeBag and Lease, to facilitate the additional capabilities.
find_serviceAttribute: Used to find the aforementioned serviceAttribute elements. The find_serviceAttribute API call returns a list of serviceAttribute structures matching the conditions specified in the arguments.
get_serviceAttributeDetail: Used to retrieve semi-static metadata associated with a unique identifier. The get_serviceAttributeDetail API call returns the serviceAttribute structure corresponding to each of the attributeKey values specified in the arguments.
get_serviceDetail: Used to retrieve the service entity structure associated with a unique identifier.

Using the Extended UDDI Schema XML API: Given the capabilities of the Extended-UDDI Service, one can simply populate metadata instances using the Extended-UDDI XML API as in the following scenario. Say, a user publishes new metadata to be attached to an already existing service in the system. In this case, the user constructs a serviceAttribute element. Based on the aforementioned extended UDDI data model, each service entry is associated with one or more serviceAttribute XML elements. A serviceAttribute corresponds to a piece of interaction-independent metadata and is simply expressed with a (name, value) pair. We can illustrate a serviceAttribute as in the following example: ((throughput, 0.9)). A serviceAttribute can be associated with a lifetime and categorized based on custom classification schemes. A simple classification could be whether the serviceAttribute is prescriptive or descriptive. In the aforementioned example, the throughput service attribute can be classified as descriptive. In some cases, a serviceAttribute may correspond to domain-specific metadata, where the service metadata is directly related to the functionality of the service. For instance, Open Geospatial Consortium compatible Geographical Information System services provide a “capabilities.xml” metadata file describing the data coverage of geospatial services. We use an abstractAttributeData element to represent such metadata and store/maintain these domain-specific auxiliary files as-is. Once the serviceAttribute is constructed, it can then be published to the Hybrid Service
by using the “save_serviceAttribute” operation of the extended UDDI XML API. On receiving a metadata publish request, the system extracts the instance of the serviceAttribute entity from the incoming request, assigns a unique identifier to it, and stores it in in-memory storage. Once the publish operation is completed, a response is sent to the publishing client.
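As a rough sketch of this publish flow, a save_serviceAttribute request might be shaped as follows. The operation name and the authInfo convention are taken from the API described in this chapter; the envelope layout, the key values, and the exact child elements are assumptions for illustration:

<save_serviceAttribute>
  <!-- authentication token previously obtained via get_authToken (hypothetical value) -->
  <authInfo>example-auth-token</authInfo>
  <!-- key of the already existing service entry the metadata attaches to (hypothetical) -->
  <serviceKey>uuid:example-service-001</serviceKey>
  <serviceAttribute>
    <name>throughput</name>
    <value>0.9</value>
    <categoryBag>
      <keyedReference keyName="attributeType" keyValue="descriptive"/>
    </categoryBag>
  </serviceAttribute>
</save_serviceAttribute>

On success, one would expect the response to carry the system-generated attributeKey, which a client could later pass to get_serviceAttributeDetail to retrieve the stored metadata.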
The WS-Context Specification

We have designed extensions and a data model for the WS-Context Specification to tackle the problem of managing distributed session state. Unlike the point-to-point approaches, WS-Context models a third-party metadata repository as an external entity where more than two services can easily access/store highly dynamic, shared metadata. The extended WS-Context Schema: The schema comprises the following entities: sessionEntity, sessionService, and context. Session entity structure: A sessionEntity describes a period of time devoted to a specific activity, the associated contexts, and the sessionServices involved in the activity. A sessionEntity can be considered an information holder for dynamically generated information. An instance of a sessionEntity is uniquely identified with a session key. A session key is generated by the system when an instance of the entity is published. If the session key is specified in a publication operation, the system updates the corresponding entry with the new information. When retrieving an instance of a session, a session key must be presented. A sessionEntity may have a name and description associated with it. A name is a user-defined identifier and its uniqueness is up to the session publisher. A user-defined identifier is useful for the information providers to manage their own data. A description is optional textual information about a session. Each sessionEntity contains one-to-many context entity structures. The context entity structure contains dynamic metadata associated with a Web Service or a session instance or both. Each sessionEntity is associated with its participant sessionServices. The sessionService entity structure is used as an information container holding limited metadata about a Web Service participating in a session. A lease structure describes a period of time during which a sessionEntity, sessionService, or context entity instance can be discoverable. Session service entity structure: The sessionService entity contains descriptive, yet limited information about Web Services participating in a session. A service key identifies a sessionService entity. A sessionService may participate in one or more sessions. There is no limit on the number of sessions in which a service can participate. These sessions are identified by session keys. Each sessionService has a name and description associated with it. This entity has an endpoint address field, which describes the endpoint address of the sessionService. Each sessionService may have one or more context entities associated with it. The lease structure identifies the lifetime of the sessionService under consideration. Context entity structure: A context entity describes dynamically generated metadata. An instance of a context entity is uniquely identified with a context key, which is generated by the system when an instance of the entity is published. If the context key is specified in a publication operation, the system updates the corresponding entry with the new information. When retrieving an instance of a context, a context key must be presented. A context is associated with a sessionEntity. The session key element uniquely identifies the sessionEntity that is an information container for the context under consideration. A context also has a service key, since it may also be associated with a sessionService participating in a session. A context has a name associated with it. A name is a user-defined identifier and its uniqueness is up to the context publisher. The information providers manage their own data in the interaction-dependent context space by using this user-defined identifier. The context value can be in any representation format, such as binary, XML, or RDF. Each context has a lifetime. Thus, each context entity contains the aforementioned lease structure describing the period of time during which it can be discoverable.
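A minimal sketch of how these entities might fit together in a concrete instance is shown below. The entity names and fields (session key, service key, name, description, endpoint address, context value, lease) follow the data model just described, while the XML serialization, the nesting, and all key values are assumptions for illustration only:

<sessionEntity sessionKey="uuid:example-session-001">
  <name>order-processing-activity</name>
  <description>Session for one run of an order-processing workflow</description>
  <lease><expires>2011-01-01T00:00:00Z</expires></lease>
  <sessionService serviceKey="uuid:example-service-001">
    <name>payment-service</name>
    <endpointAddress>http://example.org/services/payment</endpointAddress>
  </sessionService>
  <context contextKey="uuid:example-context-001">
    <name>current-state</name>
    <!-- dynamically generated, shared state visible to all session participants -->
    <value>payment-authorized</value>
    <lease><expires>2011-01-01T00:30:00Z</expires></lease>
  </context>
</sessionEntity>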
information providers manage their own data in the interaction-dependent context space by using this user-defined identifier. The context value can be in any representation format, such as binary, XML, or RDF. Each context has a lifetime. Thus, each context entity contains the aforementioned lease structure describing the period of time during which it is discoverable.

WS-Context Schema XML API: We present an XML API for the WS-Context Service. The XML API sets of the WS-Context XML Metadata Service can be grouped as Publish, Inquiry, Proprietary, and Security.

The Publish XML API is used to publish metadata instances belonging to different entities of the WS-Context Schema. It extends the WS-Context Specification Publication XML API set and consists of the following functions.
save session: Used to add/update one or more session entities in the hybrid service. Each session may contain one-to-many context entities, have a lifetime (lease), and be associated with service entries.
save context: Used to add/update one or more context (dynamic metadata) entities in the service.
save sessionService: Used to add/update one or more session service entities in the hybrid service. Each session service may contain one-to-many context entities and have a lifetime (lease).
delete session: Used to delete one or more sessionEntity structures.
delete context: Used to delete one or more contextEntity structures.
delete sessionService: Used to delete one or more session service structures.

The Inquiry XML API is used to pose inquiries and to retrieve metadata from the service. It extends the existing WS-Context XML API. The extensions to the WS-Context Inquiry API set are outlined as follows.
find session: Used to find sessionEntity elements. The find session API call returns a session list matching the conditions specified in the arguments.
find context: Used to find contextEntity elements. The find context API call returns a context list matching the criteria specified in the arguments.
find sessionService: Used to find session service entity elements. The
find sessionService API call returns a service list matching the criteria specified in the arguments.
get sessionDetail: Used to retrieve the sessionEntity data structure corresponding to each of the session key values specified in the arguments.
get contextDetail: Used to retrieve the context structure corresponding to the context key values specified.
get sessionServiceDetail: Used to retrieve the sessionService entity data structure corresponding to each of the sessionService key values specified in the arguments.

The Proprietary XML API provides find/add/modify/delete operations on the publisher list, i.e., the authorized users of the system. We adapt the semantics for the proprietary XML API from the existing UDDI Specifications. This XML API is as follows.
find publisher: Used to find publishers registered with the system matching the conditions specified in the arguments.
get publisherDetail: Used to retrieve detailed information regarding one or more publishers with given publisherID(s).
save publisher: Used to add or update information about a publisher.
delete_publisher: Used to delete information about a publisher with a given publisherID from the metadata service.

The Security XML API is used to enable authenticated access to the service. We adopt the semantics from the existing UDDI Specifications. The Security API includes the following function calls.
get_authToken: Used to request an authentication token as an 'authInfo' (authentication information) element from the service. The authInfo element allows the system to implement access control. To this end, both the publication and inquiry API sets include authentication information in their input arguments.
discard_authToken: Used to inform the hybrid service that an authentication token is no longer required and should be considered invalid.

Using the WS-Context Schema XML API: Given the capabilities of the WS-Context Service, one can populate metadata instances using the WS-Context XML API as in the following scenario. Say a user publishes metadata under an already created session. In this case, the user first constructs
a context entity element. Here, a context entity is used to represent interaction-dependent, dynamic metadata associated with a session, a service, or both. Each context entity has both system-defined and user-defined identifiers. The uniqueness of the system-defined identifier is ensured by the system itself, whereas the user-defined identifier simply enables users to manage their memory space in the context service. As an example, we can illustrate a context as ((system-defined-uuid, user-defined-uuid, "Job completed")). A context entity can also be associated with a service entity, and it has a lifetime. Contexts may be arranged in parent-child relationships. One can create a hierarchical session tree where each branch can be used as an information holder for contexts with similar characteristics. This enables the system to be queried for contexts associated with a session under consideration and to track the associations between sessions. As the context elements are constructed, they can be published with the save_context function of the WS-Context XML API. On receiving a publish metadata request, the system processes the request, extracts the context entity instance, assigns a unique identifier, stores it in the in-memory storage, and returns a response back to the client.
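To make this publish path concrete, the self-contained Java sketch below mimics the save_context semantics just described: the system assigns the system-defined key on first publish and updates in place when a key is supplied. All class names and the session key value are hypothetical illustrations written for this discussion; they are not part of the WS-Context Specification or our implementation.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative context entity: fields follow the schema description above.
    class Context {
        String contextKey;     // system-defined identifier (assigned on publish)
        String name;           // user-defined identifier; uniqueness is up to the publisher
        String sessionKey;     // owning sessionEntity
        String serviceKey;     // optional associated sessionService
        String value;          // payload: binary, XML, or RDF in the real service
        long leaseMillis;      // discoverability lifetime (lease)
    }

    class InMemoryContextStore {
        private final Map<String, Context> store = new ConcurrentHashMap<>();

        // save_context semantics: assign a key on first publish, update otherwise.
        String saveContext(Context ctx) {
            if (ctx.contextKey == null) {
                ctx.contextKey = UUID.randomUUID().toString();
            }
            store.put(ctx.contextKey, ctx);
            return ctx.contextKey;
        }

        // get_contextDetail semantics: retrieval requires the context key.
        Context getContextDetail(String contextKey) {
            return store.get(contextKey);
        }
    }

    public class SaveContextExample {
        public static void main(String[] args) {
            InMemoryContextStore service = new InMemoryContextStore();
            Context ctx = new Context();
            ctx.sessionKey = "urn:uuid:session-123";  // an already created session (assumed)
            ctx.name = "user-defined-uuid";
            ctx.value = "Job completed";
            ctx.leaseMillis = 60_000;
            String key = service.saveContext(ctx);
            System.out.println("Published context with system-defined key: " + key);
        }
    }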
The Unified Schema Specification

This research investigates a system architecture that supports information federation and unification at the application level. To facilitate testing of such a system architecture, a unified schema is needed. We achieved semantic-level unification manually, through a careful analysis of the structure and semantics of the two schemas: the extended UDDI and WS-Context. We introduced an abstract data model and a query/publish XML API, and named the result the Unified Schema Specification. We begin unification by finding the mappings between similar entities of the two schemas. The first mapping is between ExtendedUDDI.businessEntity and WS-Context.sessionEntity. The
businessEntity is used to aggregate one-to-many services and sites managed by the same people. The sessionEntity is used to aggregate session services participating in a session. Therefore, the businessEntity (from the ExtendedUDDI schema) can be considered a matching concept for the sessionEntity (from the WS-Context schema), as their intensional domains are similar. The cardinality between these entities differs, as a businessEntity may contain one-to-many sessionEntities. The second mapping is between ExtendedUDDI.service and WS-Context.sessionService. These entities are equivalent, as the intensional domains they represent are the same, and the cardinality between them is also the same. In the integrated schema, we unify these entities as the service entity. The third mapping is between ExtendedUDDI.metadata and WS-Context.context. These entities are equivalent, as the intensional domains they represent are the same, and the cardinality between them is also the same. We continue unification by merging the two schemas based on the mappings we identified, creating a unified schema. The Unified Schema unifies the matching and disjoint entities of the two schemas. It comprises the following entities: businessEntity, sessionEntity, service, bindingTemplate, metadata, tModel, and publisherAssertions. A businessEntity describes a party who publishes information about a session (in other words, a service activity), site, or service. The publisherAssertions entity defines the relationship between two businessEntities. The sessionEntity describes information about a service activity that takes place. A sessionEntity may contain one-to-many service and metadata entities. The service entity provides descriptive information about a Web Service family. It may contain one-to-many bindingTemplate entities that define the technical information about a service end-point. A bindingTemplate entity contains references to tModels that define descriptions of specifications for service end-points. The service entity may also have one-to-many metadata
attached to it. A metadata entity contains information about both interaction-dependent and interaction-independent metadata, as well as service data associated with Web Services. It describes the information pieces associated with services, sites, or sessions as (name, value) pairs.

The Unified Schema XML API: To facilitate testing of the federation capability, we introduce a limited query/publish XML API that can be carried out on instances of the Unified Schema. We can group the Unified Schema XML API under two categories: Publish and Inquiry.

The Publish XML API is used to publish metadata instances belonging to different entities of the Unified Schema. It consists of the following functions.
save business: Used to add/update one or more business entities in the hybrid service.
save session: Used to add/update one or more session entities in the hybrid service. Each session may contain one-to-many metadata and one-to-many service entities, and have a lifetime (lease).
save service: Used to add/update one or more service entries in the hybrid service. Each service entity may contain one-to-many metadata elements and may have a lifetime (lease).
save metadata: Used to register or update one or more metadata elements associated with a service.
delete business: Used to delete one or more business entity structures.
delete session: Used to delete one or more sessionEntity structures.
delete service: Used to delete one or more service entity structures.
delete metadata: Used to delete existing metadata elements from the hybrid service.

The Inquiry XML API is used to pose inquiries and to retrieve metadata from the service. It consists of the following functions.
find business: Locates specific businesses within the hybrid service.
find session: Used to find sessionEntity elements. The find session API call returns a session list matching the conditions specified in the arguments.
find service: Used to locate specific services within the hybrid service.
find metadata: Used to find metadata elements. The find metadata API call returns a metadata list matching the criteria specified in the arguments.
get businessDetail: Used to retrieve the businessEntity data structure of the Unified Schema corresponding to each of the business key values specified in the arguments.
get sessionDetail: Used to retrieve the sessionEntity data structure corresponding to each of the session key values specified in the arguments.
get serviceDetail: Used to retrieve the service entity data structure corresponding to each of the service key values specified in the arguments.
get metadataDetail: Used to retrieve the metadata structure corresponding to the metadata key values specified.

Using the Unified Schema XML API: Given these capabilities, one can populate the Hybrid Service with Unified Schema metadata instances using its XML API, as in the following scenario. Say a user wants to publish both session-related and interaction-independent metadata associated with an existing service. In this case, the user constructs a metadata entity instance. Each metadata entity has both system-defined and user-defined identifiers. The uniqueness of the system-defined identifier is ensured by the system itself, whereas the user-defined identifier simply enables users to manage their memory space in the context service. As examples, we can illustrate metadata instances as a) ((throughput, 0.9)) and b) ((system-defined-uuid, user-defined-uuid, "Job completed")). A metadata entity can also be associated with a site or sessionEntity of the Unified Schema, and it has a lifetime. As the metadata entity instances are constructed, they can be published with the "save_metadata" function of the Unified Schema XML API. On receiving a publish metadata request, the system processes the request, extracts the metadata entity instance, assigns a unique identifier, stores it in the in-memory storage, and returns a response back to the client.
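A minimal sketch of the save_metadata semantics follows, in the same spirit as the earlier save_context sketch. The Metadata class and store are illustrative stand-ins, not the published Unified Schema API; the (name, value) pair mirrors the throughput example above.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative Unified Schema metadata entity: a (name, value) pair that may
    // reference a service and/or session, with a lease.
    class Metadata {
        String metadataKey;   // system-defined identifier, assigned on first publish
        String name;          // user-defined identifier, e.g., "throughput"
        String value;         // e.g., "0.9" or "Job completed"
        String serviceKey;    // optional owning service
        String sessionKey;    // optional owning session
        long leaseMillis;     // lifetime (lease)
    }

    class UnifiedSchemaStore {
        private final Map<String, Metadata> store = new ConcurrentHashMap<>();

        // save_metadata: assign a system-defined key if absent, otherwise update.
        String saveMetadata(Metadata m) {
            if (m.metadataKey == null) {
                m.metadataKey = UUID.randomUUID().toString();
            }
            store.put(m.metadataKey, m);
            return m.metadataKey;
        }
    }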
The Hybrid Service Semantics

The Hybrid Service introduces an abstraction layer as a uniform access interface, so that it can support one-to-many information service specifications (such as WS-Context, Extended UDDI, or the Unified
Schema). To achieve this uniform access capability, the system presents two XML Schemas: a) the Hybrid Schema and b) the Specification Metadata (SpecMetadata) Schema. The Hybrid Schema defines the generic access interface to the Hybrid Service. The SpecMetadata Schema defines the information the Hybrid Service requires to process instances of the supported information service schemas. We discuss the semantics of the Hybrid Schema and the SpecMetadata Schema in the following sections.
The Hybrid Schema

The Hybrid Schema is designed to provide a unifying access interface to the Hybrid Service. It is independent of any of the local information service schemas supported by the Hybrid Service. It defines a set of XML API calls that enable clients/providers to send specification-based publish/query requests (such as WS-Context's "save_context" request) to the system in a generic way. The Hybrid Service XML API allows the system to support one-to-many information service communication protocols. It consists of the following functions.
hybrid_service: This XML API call is used to pose inquiry/publish requests based on any supported specification. With this function, the user specifies the type of the schema and the function, which allows the user to access an information service back-end directly. The user also specifies the specification-based publish/query request in XML format, based on the specification under consideration. On receiving the hybrid_service request call, the system handles the request based on the schema and function specified in the query.
save_schemaEntity: This API call is used to save an instance of any schema entity of a given specification; it updates/adds one or more schema entity elements in the Hybrid Information Service. On receiving a save_schemaEntity publication request message, the system processes the incoming message based on information given in the
mapping file of the schema under consideration. Then the system stores the newly inserted schema entity instances in the in-memory storage.
delete_schemaEntity: This API call is used to delete an instance of any schema entity of a given specification; it deletes existing entities associated with the specified key(s) from the system. On receiving a schema entity deletion request message, the system processes the incoming message based on information given in the mapping file of the schema under consideration. Then the system deletes the correct entity associated with the key.
find_schemaEntity: This API call locates schema entities whose entity types are identified in the arguments, allowing the user to locate a schema entity within the heterogeneous metadata space. On receiving a find_schemaEntity request message, the system processes the incoming message based on information given in the schema mapping file of the schema under consideration. Then the system locates the correct entities matching the query under consideration.
get_schemaEntityDetail: This API call is used to retrieve an instance of any schema entity of a given specification. It returns the entity structure corresponding to the key(s) specified in the query. On receiving a get_schemaEntityDetail retrieval request message, the system processes the incoming message based on information given in the mapping file of the schema under consideration. Then the system retrieves the correct entity associated with the key and sends the result to the user.

Given these capabilities, one can populate the Hybrid Service using the "save_schemaEntity" element and publish metadata instances of the customized implementations of information service specifications. The "save_schemaEntity" element includes an "authInfo" element, which describes the authentication information; a "lease" element, which identifies the lifetime of the metadata instance; a "schemaName" element, which identifies a specification schema
(such as the Extended UDDI Schema); a "schemaFunctionName" element, which identifies the function of the schema (such as "save_serviceAttribute"); and a "schema_SAVERequestXML" element, which is an abstract element used for passing the actual XML document of the specific publish function of a given specification. The Hybrid Service requires a specification metadata document that describes all the information necessary to process the XML API of the schema under consideration. We discuss the specification metadata semantics in the following section.
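The save_schemaEntity envelope can be pictured as a simple value object; the sketch below mirrors the five elements just listed. The Java class is a hypothetical rendering of the XML structure written for illustration, not code from the Hybrid Service.

    // Hypothetical rendering of the save_schemaEntity request described above.
    class SaveSchemaEntityRequest {
        String authInfo;              // authentication token (see get_authToken)
        long leaseMillis;             // lifetime of the metadata instance
        String schemaName;            // e.g., "WS-Context" or "Extended UDDI"
        String schemaFunctionName;    // e.g., "save_context" or "save_serviceAttribute"
        String schemaSaveRequestXML;  // the actual specification-based publish document
    }

A request for the WS-Context scenario discussed below would set schemaName to "WS-Context", schemaFunctionName to "save_context", and place the specification-based save_context XML document in schemaSaveRequestXML.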
The SpecMetadata Schema

The SpecMetadata XML Schema defines all the information required for the Hybrid Service to support an implementation of an information service specification. The Hybrid System requires an XML metadata document, generated based on the SpecMetadata Schema, for each information service specification supported by the system. The SpecMetadata XML file tells the Hybrid System how to process instances of a given specification's XML API. The SpecMetadata includes Specname, Description, and Version XML elements. These elements define descriptive information that helps the Hybrid Service identify the local information service schema under consideration. The FunctionProperties XML element describes all the required information regarding the functions that will be supported by the Hybrid Service. The FunctionProperties element consists of one-to-many FunctionProperty sub-elements. A FunctionProperty element consists of the function name, memory-mapping, and information-service-backend mapping information. Here, the memory-mapping information element defines all the information necessary to process an incoming request for in-memory storage access: the name, user-defined identifier, and system-defined identifier of an entity. The information-service-backend
information is needed to process the incoming request and execute the requested operation on the appropriate information service back-end. This information defines the function name, its arguments, return values, and the class that needs to be executed in the information service back-end. The MappingRules XML element describes all the required information regarding the mapping rules that provide the mapping between the Unified Schema and the local information service schemas, such as the extended UDDI and WS-Context. The MappingRules element consists of one-to-many MappingRule sub-elements. Each MappingRule describes how to map a Unified Schema XML API call to a local information service schema XML API call, and contains the information necessary to identify the functions that will be mapped to each other.

Given these capabilities, one can populate the Hybrid Service as in the following scenario. Say a user wants to publish metadata into the Hybrid Service using WS-Context's "save_context" operation through the generic access interface. In this case, the user first constructs an instance of the "save_context" XML document (based on the WS-Context Specification), as if s/he wanted to publish a metadata instance into the WS-Context Service. Once the specification-based publish function is constructed, it can be published into the Hybrid Service by utilizing the "save_schemaEntity" operation of the Hybrid Service Access API. As the arguments of the "save_schemaEntity" function, the user needs to pass: a) authentication information, b) lifetime information, c) the schemaName, "WS-Context", d) the schemaFunctionName, "save_context", and e) the actual save_context document constructed based on the WS-Context Specification. Recall that, for each specification, the Hybrid Service requires a SpecMetadata XML document (an instance of the Specification Metadata Schema). On receipt of the "save_schemaEntity" publish operation, the
Hybrid Service obtains the name of the schema (here, WS-Context) and the name of the publish operation (here, save_context) from the passed arguments. The Hybrid Service then consults the WS-Context SpecMetadata document and obtains the information needed to process the incoming "save_context" operation. Based on the memory-mapping information obtained from the user-provided SpecMetadata file, the system processes the request, extracts the context metadata entity instance, assigns a unique identifier, stores it in the in-memory storage, and returns a response back to the client.
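For illustration, a SpecMetadata instance for the WS-Context schema might look as follows, shown here as a Java text block. Only the element names come from the description above; the exact element layout, attribute names, and values are our assumptions.

    // A hypothetical SpecMetadata instance for the WS-Context schema.
    public class WsContextSpecMetadata {
        static final String SPEC_METADATA = """
            <SpecMetadata>
              <Specname>WS-Context</Specname>
              <Description>Session-related metadata service</Description>
              <Version>1.0</Version>
              <FunctionProperties>
                <FunctionProperty>
                  <name>save_context</name>
                  <memoryMapping entity="context"
                                 userDefinedId="name"
                                 systemDefinedId="contextKey"/>
                  <backendMapping class="WSContextResourceHandler"
                                  method="saveContext"/>
                </FunctionProperty>
                <!-- one FunctionProperty per supported function -->
              </FunctionProperties>
              <MappingRules>
                <!-- maps Unified Schema XML API calls to this schema's XML API -->
              </MappingRules>
            </SpecMetadata>
            """;
    }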
Architecture

We designed and built a Hybrid Information Service (Hybrid Service) to support the handling and discovery of metadata associated with Web Services. The Hybrid Service is an add-on architecture that interacts with the local information systems and unifies them in a higher-level hybrid system. It provides a unifying architecture in which one can assemble metadata instances of different information services.
Abstraction Layers

Figure 1 illustrates the detailed architectural design and abstraction layers of the system. (1) The Uniform Access layer imports the XML API of the supported information services. It is designed to be as generic as possible, so that it can support one-to-many XML APIs as new information services are integrated with the system. (2) The Request-processing layer is responsible for extracting incoming requests and processing operations on the Hybrid Service. It is designed to support two capabilities: notification and access control. The notification capability enables interested clients to be notified of state changes happening in a metadata instance. It is implemented by utilizing a publish-subscribe based paradigm. The access
control capability is responsible for enforcing controlled access to the Hybrid Information Service. The investigation and implementation of an access control mechanism for the decentralized information service is left for future study. (3) The TupleSpaces Access API allows access to the in-memory storage. This API supports all query/publish operations that can take place on the Tuple Pool. (4) The Tuple Pool is a lightweight implementation of the JavaSpaces Specification (Sun_Microsystems, 1999) and is a generalized in-memory storage mechanism. It enables mutually exclusive access and associative lookup to shared data. (5) The Tuple Processor layer is designed to process metadata stored in the Tuple Pool. Once the metadata instances are stored in the Tuple Pool as tuple objects, the system starts processing the tuples and provides the following capabilities. The first capability is LifeTime Management: each metadata instance may have a lifetime defined by the user, and if the metadata lifetime is exceeded, the instance is evicted from the Tuple Pool. The second capability is Persistency Management: the system periodically checks the tuple space for newly added/updated tuples and stores them into the database for persistency of information. The third capability is Fault Tolerance Management: the system periodically checks the tuple space for newly added/updated tuples and replicates them in other Hybrid Service instances using the publish-subscribe messaging system; this capability also provides consistency among the replicated datasets. The fourth capability is Dynamic Caching Management: the system keeps track of the requests coming from the pub-sub system and replicates/migrates tuples to the other information services from which the high demand originates. (6) The Filtering layer supports the federation capability. This layer provides filtering between instances of the Unified Schema and the local information service schemas (such as the WS-Context Schema), based on user-defined mapping rules that define the transformations.
Figure 1. Abstraction layers of the Hybrid Grid Information Service. Clients reach the service through the Extended UDDI, WS-Context, and Hybrid XML (WSDL-based) APIs; a uniform access interface feeds request processors with access control; a Tuple Space Access API fronts the JavaSpaces-based Tuple Pool (in-memory storage) with persistency, dynamic caching, and fault tolerance management; filters driven by XML mapping files and XSLT mapping rule files, a publish-subscribe network manager, and an information resource manager with resource handlers connect to the Extended UDDI and WS-Context back-end databases; the service joins a hybrid GIS network connected with the pub-sub system.
(7) The Information Resource Manager layer is responsible for managing the low-level information service implementations. It provides decoupling between the Hybrid Service and the sub-systems. (8) The Pub-Sub Network layer is responsible for communication between Hybrid Service instances.
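As a rough, self-contained illustration of the Tuple Pool's JavaSpaces-style semantics (leased writes, associative lookup, and mutually exclusive removal), consider the toy sketch below. It is a stand-in written for this discussion, not the JavaSpaces API or the prototype's actual code.

    import java.util.LinkedList;
    import java.util.List;

    // Toy tuple pool: leased write, associative (template) matching, and
    // mutually exclusive removal via take().
    class TuplePool {
        static class Tuple {
            final String entityName;   // e.g., "context", "sessionEntity"
            final String key;          // system-defined identifier
            final Object payload;
            final long expiresAt;      // lease expiry, epoch millis
            Tuple(String entityName, String key, Object payload, long leaseMillis) {
                this.entityName = entityName;
                this.key = key;
                this.payload = payload;
                this.expiresAt = System.currentTimeMillis() + leaseMillis;
            }
        }

        private final List<Tuple> tuples = new LinkedList<>();

        // Store a tuple with a lease; the lifetime manager evicts it once expired.
        public synchronized void write(Tuple t) {
            tuples.add(t);
        }

        // Associative lookup: null fields in the "template" act as wildcards.
        public synchronized Tuple read(String entityName, String key) {
            evictExpired();
            for (Tuple t : tuples) {
                if ((entityName == null || entityName.equals(t.entityName))
                        && (key == null || key.equals(t.key))) {
                    return t;
                }
            }
            return null;
        }

        // take() removes the matched tuple, giving mutually exclusive access.
        public synchronized Tuple take(String entityName, String key) {
            Tuple t = read(entityName, key);
            if (t != null) tuples.remove(t);
            return t;
        }

        // LifeTime Management: evict tuples whose leases have expired.
        private void evictExpired() {
            long now = System.currentTimeMillis();
            tuples.removeIf(t -> t.expiresAt < now);
        }
    }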
Distribution

Figure 2 illustrates the distribution in the Hybrid Service and shows N-node decentralized services from the perspective of a single service interacting with two clients. To achieve communication among the network nodes, the Hybrid Service utilizes a topic-based publish-subscribe software multicasting mechanism. This is a multi-publisher, multicast communication mechanism, which provides message-based communication. In the prototype implementation, we use an open source implementation of the publish-subscribe paradigm, NaradaBrokering (Pallickara & Fox, 2003), for message exchanges between peers.
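The sketch below shows the shape of such a topic-based publish-subscribe mechanism in Java. It is a minimal in-process stand-in for the messaging substrate; NaradaBrokering's real API differs.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Minimal topic-based publish-subscribe broker (illustrative only).
    class TopicBroker {
        private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

        public void subscribe(String topic, Consumer<String> handler) {
            subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
        }

        // Multi-publisher, multicast: every subscriber on the topic receives the message.
        public void publish(String topic, String message) {
            subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(message));
        }
    }

A Hybrid Service instance would, for example, subscribe to its replication topic at startup and publish newly stored tuples to that topic, so that peer replica servers can copy them.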
Execution Logic Flow

The execution logic of the Hybrid Service proceeds as follows. First, on receiving a client request, the request processor extracts the incoming request and processes it by checking it against the specification-mapping metadata (SpecMetadata) files. For each supported schema there is a SpecMetadata file, which defines all the functions that can be executed on instances of the schema under consideration. Each function defines the required information about how the schema entities are to be represented in the Tuple Pool (for example, the entity name, the entity identifier key, and so on). Based on this information, the request processor extracts the inquiry/publish request from the incoming message and executes it on the Tuple Pool. We apply the following strategy to process incoming requests. First of all,
Figure 2. Distributed Hybrid Services. Two clients interact with a Hybrid Service over HTTP(S) through WSDL interfaces; each of the N replica servers (Replica Server-1 through Replica Server-N) hosts a Hybrid Service with publisher/subscriber components and Extended UDDI and WS-Context back-end databases, all connected through a topic-based publish-subscribe messaging system.
the system keeps all locally available metadata keys in an in-memory table. On receipt of a request, the system first checks whether the metadata is available in memory by consulting this metadata-key table. If the requested metadata is not available in the local system, the request is forwarded to the Pub-Sub Manager layer to probe other Hybrid Services for the requested metadata. If the metadata is in the in-memory storage, the request processor utilizes the Tuple Space Access API and executes the query on the Tuple Pool. Some requests may need to be executed on the local information service back-end; for example, if the client's query requires SQL query capabilities, it will be forwarded to the Information Resource Manager, which is responsible for managing the local information service implementations.

Second, once the request is extracted and processed, the system presents abstraction layers for capabilities such as access control and notification. The first capability is access control management. This capability layer is intended to provide access control for metadata accesses. As the focus of our investigation is the distributed metadata management aspects of information services, we leave the research and implementation of this capability as future study. The
second capability is notification management. Here, the system informs interested parties of state changes happening in the metadata. This way, the interested entities can keep track of information regarding a particular metadata instance.

Third, if the request is to be handled in memory, the Tuple Space Access API is used to access the in-memory storage. This API allows us to perform operations on the Tuple Pool. The Tuple Pool is an in-memory storage mechanism that provides a storage capability in which the metadata instances of different information service schemas can be represented.

Fourth, once the metadata instances are stored in the Tuple Pool as tuple objects, the tuple processor layer is used to process tuples and provide a variety of capabilities. The first capability is LifeTime Management. Each metadata instance may have a lifetime defined by the user; if the metadata lifetime is exceeded, the instance is evicted from the Tuple Pool. The second capability is Persistency Management. The system periodically checks the tuple space for newly added/updated tuples and stores them into the local information service back-end. The third capability is Dynamic Caching Management. The system keeps track of the requests coming from the other Hybrid Service instances
and replicates/migrates metadata to where the high demand originates. The fourth capability is Fault Tolerance Management. The system again periodically checks the tuple space for newly added/updated tuples and replicates them in other information services using the pub-sub system. This capability is also responsible for providing consistency among the replicated datasets. As the main focus of this chapter is information federation in Information Services, the detailed discussion of the replication, distribution, and consistency enforcement aspects of the system is the focus of another paper (Aktas, Fox, & Pierce, 2008).

The Hybrid Service supports a federation capability to address the problem of providing integrated access to heterogeneous metadata. To facilitate the testing of this capability, a Unified Schema is introduced by integrating the different information service schemas. If the metadata is an instance of the Unified Schema, it needs to be mapped into the appropriate local information service back-end. To achieve this, the Hybrid Service utilizes the filtering layer, which filters based on the user-defined mapping rules to provide transformations between Unified Schema instances and local schema instances. If the metadata is an instance of a local schema, the system does not apply any filtering and backs up this metadata to the corresponding local information service back-end.

Fifth, if the metadata is to be stored in the information service back-end (for persistency of information), the Information Resource Management layer is used to provide the connection with the back-end resource. The Information Resource Manager handles the management of local information service implementations and provides decoupling between the Hybrid Service and the sub-systems. With the implementation of the Information Resource Manager, we have provided a uniform, single interface to the sub-information systems. The Resource Handler implements the sub-information system functionalities. Each information service
implementation has a Resource Handler that enables interaction with the Hybrid Service.

Sixth, if the metadata is to be replicated or stored into other Hybrid Service instances, the Pub-Sub Management layer is used for managing interactions with the pub-sub network. On receiving requests from the Tuple Processor, the Pub-Sub Manager publishes the requests to the corresponding topics. The Pub-Sub Manager may also receive key-based access/storage requests from the pub-sub network; in this case, these requests are carried out on the Tuple Pool by utilizing the TupleSpace Access API. The Pub-Sub Manager utilizes publisher and subscriber subcomponents in order to provide communication among the instances of the Hybrid Service.
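Putting the pieces together, the request-routing strategy described above (serve from the local Tuple Pool when the metadata key is known, otherwise probe peers over the pub-sub network) can be sketched as follows, reusing the illustrative TuplePool and TopicBroker classes from the earlier sketches. The topic name is hypothetical.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative request processor: local key table first, then pub-sub probe.
    class RequestProcessor {
        private final Set<String> localKeys = ConcurrentHashMap.newKeySet(); // metadata-key table
        private final TuplePool pool;
        private final TopicBroker broker;

        RequestProcessor(TuplePool pool, TopicBroker broker) {
            this.pool = pool;
            this.broker = broker;
        }

        Object handleGet(String entityName, String key) {
            if (localKeys.contains(key)) {
                TuplePool.Tuple t = pool.read(entityName, key);  // local in-memory hit
                return t != null ? t.payload : null;
            }
            // Miss: forward the request to other Hybrid Service instances.
            broker.publish("hybrid/metadata-requests", entityName + ":" + key);
            return null;  // the response arrives asynchronously over the pub-sub system
        }
    }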
Prototype Implementation

The Hybrid Information Service prototype implementation consists of several modules: Query and Publishing, Expeditor, Filter and Resource Manager, Sequencer, and Access and Storage. The software is an open source project and is available at (Aktas, n.d.). The Query and Publishing module is responsible for processing the incoming requests issued by end-users. The Expeditor module forms a generalized in-memory storage mechanism and provides a number of capabilities, such as persistency of information. The Filter and Resource Manager modules provide decoupling between the Hybrid Information Service and the sub-systems. The Sequencer module is responsible for labeling each incoming context with a synchronized timestamp. Finally, the Access and Storage modules are responsible for the actual communication between the distributed Hybrid Service nodes, supporting the functionalities of a replica hosting system.

The Query and Publishing module is responsible for implementing a uniform access interface for the Hybrid Information Service. This module implements the Request Processing abstraction
layer with access control and notification capabilities. On completing the request processing task, the Query and Publishing module utilizes the Tuple Space API to execute the request on the Tuple Pool. On completion of the operation, the Query and Publishing module sends the result to the client. As discussed earlier, context information may not be open to everyone, so there is a need for an information security mechanism. We leave the full investigation and implementation of this mechanism as future study. We must note that, to facilitate testing of the centralized Hybrid Service in various application use domains, we implemented a simple information security mechanism. Based on this implementation, the centralized Hybrid Service requires an authentication token to restrict who can perform inquiry/publish operations. The authentication token is obtained from the Hybrid Service at the beginning of the client-server interaction. In this scenario, a client can only access the system if he/she is authorized by the system and his/her credentials match. If the client is authorized, he/she is granted an authentication token, which needs to be passed in the argument lists of publish/inquiry operations. The Query and Publishing module also implements a notification scheme. This is achieved by utilizing a publish-subscribe based messaging scheme, which enables users of the Hybrid Service to utilize a push-based information retrieval capability in which interested parties are notified of state changes. This push-based approach reduces the server load caused by continuous information polling. We use the NaradaBrokering software (Pallickara & Fox, 2003) as the messaging infrastructure and its libraries to implement the subscriber and publisher components.

The Expeditor module implements the Tuple Spaces Access API, the Tuple Pool, and the tuple-processing layer. The Tuple Spaces Access API provides an access interface on the Tuple Pool. The Tuple Pool is a generalized in-memory storage mechanism. Here, to meet the performance requirements of the proposed architecture, we built
an in-memory storage based on the TupleSpaces paradigm (Carriero & Gelernter, 1989). The tuple-processing layer introduces a number of capabilities: LifeTime Management, Persistency Management, Dynamic Caching Management, and Fault Tolerance Management. Here, the LifeTime Manager is responsible for evicting tuples with expired leases. The Persistency Manager is responsible for backing up newly stored or updated metadata into the information service back-ends. The Fault Tolerance Manager is responsible for creating replicas of newly added metadata. The Dynamic Caching Manager is responsible for replicating/migrating metadata under high demand onto the replica servers where the demand originated.

The Filtering module implements the filtering layer, which provides a mapping capability based on user-defined mapping rules. The Filtering module obtains the mapping rule information from user-provided mapping rule files. As the mapping rule file format, we use XSL Transformation (XSLT) files; XSLT provides general-purpose XML transformation based on pre-defined mapping rules. Here, the mapping happens between the XML APIs of the Unified Schema and the local information service schemas (such as the WS-Context or extended UDDI schemas).

The Information Resource Manager module handles the management of local information service implementations, such as the extended UDDI. The Resource Manager module separates the Hybrid System from the sub-system classes. It knows which sub-system classes are responsible for a request and which method needs to be executed, by processing the specification-mapping metadata file that belongs to the local information service under consideration. On receipt of a request, the Information Resource Manager checks the corresponding mapping file and obtains information about the specification implementation. Such information identifies the class that needs to be executed, the function that needs to be
invoked, and the function's input and output types, so that the Information Resource Manager can delegate the handling of the incoming request to the appropriate sub-system. Using this approach, the Hybrid Service can support one-to-many information services, as long as the sub-system implementation classes and the specification-mapping metadata (SpecMetadata) files are provided. The Resource Handler is an external component to the Hybrid Service, used to interact with the sub-information systems. Each specification has a Resource Handler, which allows interaction with the database. The Hybrid System classes communicate with the sub-information systems by sending requests to the Information Resource Manager, which forwards the requests to the appropriate sub-system implementation. Although the sub-system object (from the corresponding Resource Handler) performs the actual work, from the perspective of the Hybrid Service inner classes the Information Resource Manager appears to do the work itself. This approach separates the Hybrid Service implementation from the local schema-specific implementations.

The Resource Manager module is also used for recovery purposes. We have provided a recovery process to support a persistent in-memory storage capability; such recovery is needed if the physical memory is wiped out when power fails or the machine crashes. The recovery process converts the database data into in-memory storage data (from the last backup) and runs at the bootstrap of the Hybrid Service. It utilizes user-provided "find_schemaEntity" XML documents to retrieve instances of schema entities from the information service back-end. Each "find_schemaEntity" XML document is a wrapper for schema-specific "find" operations. At the bootstrap of the system, the recovery process first applies the schema-specific find functions on the information service back-end and retrieves metadata instances of the schema entities. It then stores these metadata instances into the in-memory storage to achieve a persistent in-memory storage capability.
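The XSLT-based filtering described under the Filtering module can be realized with the standard javax.xml.transform machinery; a minimal sketch follows, where the mapping rule file name unified-to-wscontext.xsl is hypothetical.

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // Applies a user-defined XSLT mapping rule file to transform a Unified Schema
    // request into the corresponding local schema request.
    public class SchemaFilter {
        public static String transform(String unifiedRequestXml) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            Transformer transformer = factory.newTransformer(
                    new StreamSource("unified-to-wscontext.xsl"));
            StringWriter out = new StringWriter();
            transformer.transform(
                    new StreamSource(new StringReader(unifiedRequestXml)),
                    new StreamResult(out));
            return out.toString();
        }
    }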
In order to impose an order on updates, each context has to be time-stamped before it is stored or updated in the system. The responsibility of the Sequencer module is to assign a timestamp to each metadata instance that will be stored into the Hybrid Service. To do this, the Sequencer module interacts with a Network Time Protocol (NTP)-based time service (Bulut, Pallickara, & Fox, 2004) implemented by the NaradaBrokering (Pallickara & Fox, 2003) software. This service achieves synchronized timestamps by synchronizing the machine clocks with atomic timeservers available across the globe.
Conclusion

This chapter introduced the principles of and experiences in designing and building a Web-based Enterprise Information Service. Within this emphasis, it also introduced a novel architecture for an Enterprise Information Service, called the Hybrid Service, supporting the handling and discovery of not only quasi-static, stateless metadata, but also session-related metadata. The Hybrid Service is an add-on architecture that runs one layer above existing information service implementations. Although it mainly manages metadata that may be associated with Web Services, it can also be used to manage any metadata about Web resources on the Internet. It provides unification, federation, and interoperability of Enterprise Information Services. To achieve unification, the Hybrid Service is designed as a generic system with front- and back-end abstraction layers supporting one-to-many local information systems and their communication protocols. To achieve federation, the Hybrid Service is designed to support an information integration technique in which metadata from several heterogeneous sources are transferred into a global schema and queried with a uniform query interface. To manage both quasi-static and dynamic metadata and to provide interoperability with a wide range of Web Service
applications, the Hybrid Service is integrated with two local information services: the WS-Context XML Metadata Service and the Extended UDDI XML Metadata Service. The WS-Context Service is implemented based on the WS-Context Specification to manage dynamic, session-related metadata. The Extended UDDI Service is implemented based on an extended version of the UDDI Specification to manage semi-static, stateless metadata.
References

Aktas, M. S. (n.d.). Fault tolerant high performance information service (FTHPIS): Hybrid WS-Context service Web site. Retrieved from http://www.opengrids.org/wscontext

Aktas, M. S., Fox, G. C., & Pierce, M. E. (2008). Distributed high performance grid information service. Submitted to Journal of Systems and Software.

Bellwood, T., Clement, L., & von Riegen, C. (2003). UDDI version 3.0.1: UDDI spec technical committee specification. Retrieved from http://uddi.org/pubs/uddi-v3.0.1-20031014.htm

Bulut, H., Pallickara, S., & Fox, G. (2004, June 16-18). Implementing a NTP-based time service within a distributed brokering system. In ACM International Conference on the Principles and Practice of Programming in Java, Las Vegas, NV.

Bunting, B., Chapman, M., Hurley, O., Little, M., Mischinkinky, J., Newcomer, E., et al. (2003). Web services context (WS-Context) version 1.0. Retrieved from http://www.arjuna.com/library/specs/ws_caf_1-0/WS-CTX.pdf

Carriero, N., & Gelernter, D. (1989). Linda in context. Communications of the ACM, 32(4), 444-458. doi:10.1145/63334.63337

Dialani, V. (2002). UDDI-M version 1.0 API specification. Southampton, UK: University of Southampton.

Florescu, D., Levy, A., & Mendelzon, A. (1998). Database techniques for the World Wide Web: A survey. SIGMOD Record, 27(3), 59-74. doi:10.1145/290593.290605

Galdos Inc. (n.d.). Retrieved from http://www.galdosinc.com

GRIMOIRES. (n.d.). UDDI compliant Web service registry with metadata annotation extension. Retrieved from http://sourceforge.net/projects/grimoires

MyGrid. (n.d.). UK e-science project. Retrieved from http://www.mygrid.org.uk

OGF Grid Interoperation Now Community Group (GIN-CG). (n.d.). Retrieved from https://forge.gridforum.org/projects/gin

Open GIS Consortium Inc. (2003). OWS1.2 UDDI experiment. OpenGIS interoperability program report OGC 03-028. Retrieved from http://www.opengeospatial.org/docs/03-028.pdf

Özsu, M. T., & Valduriez, P. (1999). Principles of distributed database systems (2nd ed.). Prentice Hall.

Pallickara, S., & Fox, G. (2003). NaradaBrokering: A distributed middleware framework and architecture for enabling durable peer-to-peer grids. In Proceedings of the ACM/IFIP/USENIX International Middleware Conference (Middleware 2003), Rio de Janeiro, Brazil (LNCS). Springer-Verlag.

ShaikhAli, A., Rana, O., Al-Ali, R., & Walker, D. (2003). UDDIe: An extended registry for Web services. In Proceedings of Service Oriented Computing: Models, Architectures, and Applications (SAINT-2003), Orlando, FL. IEEE Computer Society Press.

Sun Microsystems. (1999). JavaSpaces specification, revision 1.0. Retrieved from http://www.sun.com/jini/specs/js.ps

Sycline Inc. (n.d.). Retrieved from http://www.synclineinc.com

Valduriez, P., & Pacitti, E. (2004). Data management in large-scale P2P systems. In High Performance Computing for Computational Science (VecPar 2004) (LNCS 3402, pp. 109-122). Springer.

Verma, K., Sivashanmugam, K., Sheth, A., Patil, A., Oundhakar, S., & Miller, J. (n.d.). METEOR-S WSDI: A scalable P2P infrastructure of registries for semantic publication and discovery of Web services. Journal of Information Technology and Management.

Zanikolas, S., & Sakellariou, R. (2005). A taxonomy of grid monitoring systems. Future Generation Computer Systems, 21(1), 163-188. doi:10.1016/j.future.2004.07.002

Ziegler, P., & Dittrich, K. (2004). Three decades of data integration: All problems solved? In WCC, 3-12.
This work was previously published in Always-On Enterprise Information Systems for Business Continuance: Technologies for Reliable and Scalable Operations, edited by Nijaz Bajgoric, pp. 58-77, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 1.2
Evolution of Enterprise Resource Planning

Ronald E. McGaughey
University of Central Arkansas, USA

Angappa Gunasekaran
University of Massachusetts-Dartmouth, USA

DOI: 10.4018/978-1-60566-146-9.ch002
Abstract

Business needs have driven the design, development, and use of the enterprise-wide information systems we call Enterprise Resource Planning (ERP) systems. Intra-enterprise integration was a driving force in the design, development, and use of early ERP systems. Changing business needs have brought about the current business environment, wherein supply chain integration is desirable, if not essential; thus, current and evolving ERP systems demonstrate an expanded scope of integration that encompasses limited inter-enterprise integration. This chapter explores the evolution, the current status, and the future of ERP, with the objective of promoting relevant future research in this important area. If researchers hope to play a significant role in the design, development, and use of suitable ERP
systems to meet evolving business needs, then their research should focus at least in part on the changing business environment, its impact on business needs, and the requirements for enterprise systems that meet those needs.
Introduction

Twenty years ago, supplier relationship management was unique to the Japanese (those firms that embraced the JIT philosophy), China was still a slumbering economic giant, the Internet was largely for academics and scientists and certainly not a consideration in business strategy, the very idea of a network of businesses working together as a virtual enterprise was almost like science fiction, and hardly anyone had a cell phone. The world has changed. The cold war is over and the economic war is on. We have moved rapidly toward an intensely
competitive, global economic environment. Countries like China and India are fast positioning themselves as key players, threatening the economic order that has existed for decades. Information Technology (IT) is more sophisticated than ever, yet we still struggle with how best to use it in business, and on a personal level as well. E-commerce (B2B, B2C, C2C, G2C, B2G) has become commonplace, and M-commerce is not far behind, especially in Europe and Japan. In 2007, for the first time, there were more cell phones than tethered phones in the US, and increasingly sophisticated cell phones have capabilities that exceed those of older PCs. This is the backdrop against which we discuss the evolving enterprise information system. At this point we call it ERP, but it should become evident in the course of reading this chapter that ERP is a label that may no longer be appropriate for evolving enterprise and inter-enterprise systems. In this chapter we define ERP and discuss the evolution of ERP, the current state of ERP, and the future of ERP. We emphasize how the evolution of ERP has been influenced by changing business needs and by evolving technology, and we present a simple framework to explain that evolution. Some general directions for future research are indicated by our look at the past, present, and particularly the future of ERP.
ERP Defined

The ERP system is an information system that integrates business processes, with the aim of creating value and reducing costs by making the right information available to the right people at the right time to help them make good decisions in managing resources productively and proactively. An ERP system is comprised of multi-module application software packages that serve and support multiple business functions (Sane, 2005). These large, automated, cross-functional systems were designed to bring about improved operational
efficiency and effectiveness by integrating, streamlining, and improving fundamental back-office business processes. Traditional ERP systems were called back-office systems because they involved activities and processes in which the customer and the general public were not typically involved, at least not directly. Functions supported by ERP typically included accounting, manufacturing, human resource management, purchasing, inventory management, inbound and outbound logistics, marketing, finance, and, to some extent, engineering. The objective of traditional ERP systems in general was greater efficiency, and to a lesser extent effectiveness. Contemporary ERP systems have been designed to streamline and integrate operation processes and information flows within a company to promote synergy (Nikolopoulos, Metaxiotis, Lekatis, & Assimakopoulos, 2003) and greater organizational effectiveness. These newer ERP systems have moved beyond the back office to support front-office processes and activities like those fundamental to customer relationship management. The goal of most firms implementing ERP is to replace diverse functional systems with a single integrated system that does it all faster, better, and cheaper. Unfortunately, the "business and technology integration technology in a box" has not entirely met expectations (Koch, 2005). While there are some success stories, many companies devote significant resources to their ERP effort only to find the payoff disappointing (Dalal, Kamath, Kolarik, & Sivaraman, 2003; Koch, 2005). Let us examine how we have come to this point in the ERP lifecycle.
The Evolution of ERP

The origin of ERP can be traced back to Materials Requirement Planning (MRP). While the concept of MRP was understood and discussed in the 1960s, it was not practical for commercial use. It was the availability of computing power (processing capability and
storage capacity) that made commercial use of MRP possible and practical. While many early MRP systems were built in-house, often at great expense, MRP became one of the first popular off-the-shelf business applications (Orlicky, 1975). In essence, MRP involves taking inventory records, the master production schedule (MPS), and bills of materials, and calculating time-phased material, component, and sub-assembly requirements (gross and net), planned orders, delivery dates, and more. Note that the term calculating is used rather than forecasting. With a realistic MPS, accurate inventory records, lead times that are known and predictable, and current and correct bills of materials, it is possible to calculate material, component, and assembly requirements rather than forecast them. The sheer volume of calculations necessary for MRP, with multiple orders for even a few items, made the use of computers essential. Initially, batch processing systems were used and regenerative MRP systems were the norm, where the plan would be updated periodically, often weekly. MRP employed a type of backward scheduling wherein lead times were used to work backwards from a due date to an order release date. While the primary objective of MRP was to compute material requirements, the MRP system proved also to be a useful scheduling tool. Order placement and order delivery were planned by the MRP system. Not only were orders for materials and components generated by an MRP system, but also production orders for manufacturing operations that used those materials and components to make higher-level items like sub-assemblies and finished products. As MRP systems became popular and more and more companies started using them, practitioners, vendors, and researchers started to realize that the data and information produced by the MRP system in the course of material requirements planning and production scheduling could be augmented with additional data and used for other purposes. One of the earliest add-ons was the Capacity Requirements Planning module, which could be used in developing capacity plans to produce the master
production schedule. Manpower planning and support for human resources management were incorporated into MRP, and distribution management capabilities were added. The enhanced MRP and its many modules provided data useful in the financial planning of manufacturing operations, so financial planning capabilities were added as well. Business needs, primarily for operational efficiency and, to a lesser extent, for greater effectiveness, together with advancements in computer processing and storage technology, brought about MRP and influenced its evolution. What started as an efficiency-oriented tool for production and inventory management was becoming increasingly a cross-functional system. A very important capability to evolve in MRP systems was the ability to close the loop (control loop). This was largely because of the development of real-time (closed-loop) MRP systems to replace regenerative MRP systems, in response to changing business needs and improved computer technology, as time-sharing replaced batch processing as the dominant computer processing mode. With time-sharing mainframe systems, the MRP system could run 24/7 and update continuously. Use of the corporate mainframe that performed other important computing tasks for the organization was not practical for some companies, because MRP consumed too many system resources; subsequently, some companies opted to use mainframes (now growing smaller and cheaper, but increasing in processing speed and storage capability) or mini-computers (which could do more, faster, than old mainframes) that could be dedicated to MRP. MRP could now respond (update relevant records) in a timely fashion to data fed into the system and produced by the system. This closed the control loop with timely feedback for decision making by incorporating current data from the factory floor, warehouse, vendors, transportation companies, and other internal and external sources, thus giving the MRP system the capability to provide current (almost real-time) information for better planning and control. These closed-loop systems better reflected the realities
of the production floor, logistics, inventory, and more. It was this transformation of MRP into a planning and control tool for manufacturing by closing the loop, along with all the additional modules that did more than plan materials (they planned and controlled various manufacturing resources), that led to MRP II. Here too, improved computer technology and the evolving business need for more accurate and timely information to support decision making and greater organizational effectiveness contributed to the evolution from MRP to MRP II. The MRP in MRP II stands for Manufacturing Resource Planning rather than Materials Requirement Planning. The MRP system had evolved from a material requirements planning system into a planning and control system for resources in manufacturing operations: an enterprise information system for manufacturing. As time passed, MRP II systems became more widespread and more sophisticated, particularly when used in manufacturing to support and complement computer integrated manufacturing (CIM). Databases started replacing traditional file systems, allowing for better systems integration and greater query capabilities to support decision makers, and the telecommunications network became an integral part of these systems in order to support communications between, and coordination among, system components that were sometimes geographically distributed but still within the company. In that context, the label CIM II was used for a short time to describe early systems with capabilities now associated with ERP (Lope, 1992). The need for greater efficiency and effectiveness in back-office operations was not unique to manufacturing, but was also common to non-manufacturing operations. Companies in non-manufacturing sectors such as healthcare, financial services, air transportation, and the consumer goods sector (Chung & Snyder, 2000) started to use MRP II-like systems to manage critical resources, thus the M for manufacturing seemed not always appropriate. In the early 90s, these increasingly
sophisticated back-office systems were more appropriately labeled Enterprise Resource Planning (ERP) systems (Nikolopoulos, Metaxiotis, Lekatis, and Assimakopoulos, 2003). MRPII was mostly for automating business processes within an organization; ERP, while primarily supporting internal processes, started to support processes that spanned enterprise boundaries (the extended enterprise). While ERP systems originated to serve the information needs of manufacturing companies, their domain was no longer just manufacturing. Early ERP systems typically ran on mainframes like their predecessors, MRP and MRPII, but many migrated to client/server systems where networks were central and distributed databases more common. The growth of ERP and the migration to client/server systems got a real boost from the Y2K scare. Many companies were convinced of the need to replace older mainframe-based systems, some ERP and some not, with the newer client/server architecture. After all, since they were going to have to make so many changes in the old systems to make them Y2K compliant and avoid serious problems (this was what vendors and consultants often told them, to create FUD: fear, uncertainty, and doubt), they might as well bite the bullet and upgrade. Vendors and consultants benefited from the Y2K boost to ERP sales, as did some of their customers. Since Y2K, ERP systems have evolved rapidly, bringing us to the ERP systems of today. Present-day ERP systems offer more and more capabilities and are becoming more affordable, even for small and medium-sized enterprises (SMEs) (Dahlen and Elfsson, 1999).
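Before turning to ERP today, it may help to make the original MRP logic concrete. The following sketch, in Python and with hypothetical part data, shows the textbook netting calculation at the heart of MRP: gross requirements from the master schedule are netted against on-hand inventory and scheduled receipts, and any shortfall becomes a planned order release offset by the part's lead time. This is a simplified, lot-for-lot illustration, not a reconstruction of any particular vendor's system.

# Simplified sketch of the core MRP netting calculation.
# All part data below are hypothetical.

LEAD_TIME = 2        # periods between order release and receipt
ON_HAND = 40         # opening inventory

gross_requirements = [0, 30, 10, 50, 20, 60]   # from the master schedule
scheduled_receipts = [0, 0, 25, 0, 0, 0]       # orders already placed

def mrp_plan(gross, receipts, on_hand, lead_time):
    planned_releases = [0] * len(gross)
    for period, demand in enumerate(gross):
        available = on_hand + receipts[period]
        net = demand - available          # positive value = shortfall
        if net > 0:
            # Plan a lot-for-lot order covering the shortfall,
            # released lead_time periods earlier.
            planned_releases[max(period - lead_time, 0)] += net
            on_hand = 0
        else:
            on_hand = available - demand  # carry surplus forward
    return planned_releases

print(mrp_plan(gross_requirements, scheduled_receipts, ON_HAND, LEAD_TIME))
# -> [0, 25, 20, 60, 0, 0]

A closed-loop MRP system, as described above, re-runs exactly this kind of calculation as current data arrive from the factory floor, warehouse, and vendors, rather than only in periodic batch regenerations.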
ERP TODAY

As ERP systems continue to evolve, vendors like PeopleSoft (Conway, 2001) and Oracle (Green, 2003) are moving to an Internet-based architecture, in large part because of the ever-increasing importance of E-commerce and the globalization of business (Abboud and Vara, 2007). Beyond
that, perhaps the most salient trend in the continuing evolution of ERP is the focus on front-office applications and inter-organizational business processes, particularly in support of supply chain management (Scheer and Habermann, 2000; Al-Mashari, 2002). ERP is creeping out of the back office into the front office and beyond the enterprise to customers, suppliers, and more, in order to meet changing business needs (Burns, 2007). Front-office applications involve interaction with external constituents such as customers, suppliers, and partners; they are called front-office because they are visible to "outsiders." Key players like Baan, Oracle, PeopleSoft, and SAP have incorporated Advanced Planning and Scheduling (APS), Sales Force Automation (SFA), Customer Relationship Management (CRM), Supply Chain Management (SCM), Business Intelligence, and E-commerce modules/capabilities into their systems, or repositioned their ERP systems as part of broader enterprise suites incorporating these and other modules/capabilities. ERP products reflect the evolving business needs of clients and the capabilities of IT, perhaps most notably those related to the Web. While some companies are expanding their ERP system capabilities (adding modules) and still calling them ERP systems, others have started to use catchy labels like enterprise suite, E-commerce suite, and enterprise solutions to describe their enterprise solution clusters that include ERP among other modules/capabilities. Table 1 lists the various modules/capabilities taken from the product descriptions of vendors like PeopleSoft, Oracle, J.D. Edwards, and SAP, who are major players in the ERP/enterprise systems market. Perhaps most notable about ERP today is that it is much more than manufacturing resource planning. ERP and ERP-like systems have become popular with non-manufacturing operations like universities, hospitals, and airlines, where back-office efficiency is important and so too are front-office efficiency and effectiveness (Chung and Snyder, 2000). In general, it is accurate to
Table 1. Enterprise system modules

• Enterprise Resource Planning (ERP)
• Customer Relationship Management (CRM)
• Sales Management
• Field Service Management
• Retail Management
• Asset Management
• Financial Management
• Yield Management
• Business Collaboration
• Supplier Relationship Management (SRM)
• Inventory Management
• Order Processing
• Business Intelligence
• Data Warehouse
• Knowledge Management
• Analytics and Reporting
• Online Business Services
• User Services
• E-Commerce
• M-Commerce
• Facilities Management
• Maintenance Management
• Warehouse Management
• Logistics Management
• Distribution Management
• Project Management
• Human Resource Management
state that today's ERP systems, or ERP-like systems, typically include modules/capabilities associated with front-office processes and activities. ERP capabilities are packaged with other modules that support front-office and back-office processes and activities, and nearly anything else that goes on within organizations. ERP proper (the back-office system) has not become unimportant, because back-office efficiency and effectiveness was, is, and always will be important. Today's focus, however, seems to be more external, as organizations look for ways to support and improve relationships and interactions with customers, suppliers, partners, and other stakeholders (Knapen, 2007). While integration of internal functions is still important, and in many organizations still has not been achieved to a great extent, external
integration is now receiving much attention. Progressive companies want to do all things faster, better, and cheaper (to be agile), and they want systems and tools that will improve competitiveness, increase profits, and help them not just to survive, but to prosper in a dynamic global economy. Today, that means working with suppliers, customers, and partners like never before. Vendors are using the latest technology to respond to these evolving business needs, as evidenced in the products and services they now offer or plan to offer their customers. Will ERP be the all-encompassing system comprising the many modules and capabilities mentioned, or will it be relegated to the status of a module in the all-encompassing systems of the future?
ERP AND THE FUTURE

New, multi-enterprise business models like Value Collaboration Networks, customer-centric networks that coordinate all players in the supply chain, are evolving as we enter the 21st century (Nattkemper, 2000). These new business models reflect an increased business focus on external integration. While no one can really predict the future of ERP very far ahead, current management concerns and emphases, vendor plans, and the changing business and technological environments provide some clues about the future of ERP. We turn our attention now to the evolving business needs and technological changes that should shape that future. E-Commerce is arguably one of the most important developments in business in the last 50 years (Jack Welch reportedly called it the "Viagra of business"), and M-Commerce is poised to take its place alongside, or within, the rapidly growing area of E-Commerce. Internet technology has made E-Commerce in its many forms (B2B, B2C, B2G, G2C, C2C, etc.) possible. Mobile and wireless technology is expected to make "always on" Internet access and "anytime/anywhere" location-based
services (which also require global positioning systems) a reality, as well as a host of other capabilities we categorize as M-Business. One can expect to see ERP geared more to the support of both E-Commerce and M-Commerce. Internet, mobile, and wireless technology should figure prominently in new and improved system modules and capabilities (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002; O'Brien, 2002; Sane, 2005). Vendors and their customers will find it necessary to make fairly broad, sweeping infrastructure changes to meet the demands of E-Commerce and M-Commerce (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002; Higgins, 2005). Movement away from client/server systems to Internet-based architectures is likely; in fact, it has already started (Scheer and Habermann, 2000; Conway, 2001; Abboud and Vara, 2007). New systems will have to incorporate existing and evolving standards, and older systems will have to be adapted to them, which may make the transition somewhat uncomfortable and expensive for vendors and their customers. Perhaps the biggest business challenge with E-Commerce, and even more so with M-Commerce, is understanding how to use these new and evolving capabilities to better serve the customer, work with suppliers and other business partners, and be generally more efficient and effective. Businesses are just beginning to understand E-Commerce, how it can be used to meet changing business needs, and how it changes those needs, and now M-Commerce poses a whole new challenge. E-Commerce and M-Commerce pose challenges for vendors and for their clients. Back-office and front-office processes and activities are being affected by E-Commerce/E-Business and will most certainly be affected by M-Commerce/M-Business. The current business emphasis on intra- and inter-organizational process integration and external collaboration should remain a driving force in the evolution of ERP for the foreseeable future. Some businesses are attempting to transform
themselves from traditional, vertically integrated organizations into multi-enterprise, "recombinant entities" reliant on core-competency-based strategies (Genovese, Bond, Zrimsek and Frey, 2001). Integrated SCM and business networks will receive great emphasis, reinforcing the importance of IT support for cross-enterprise collaboration and inter-enterprise processes (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002; Al-Mashari, 2002). Collaborative commerce (C-Commerce) has become not only a popular buzzword, but also a capability businesses desire and need. C-Commerce is a label used to describe Internet-based (at least at present) electronic collaboration among businesses, typically supply chain partners, in support of inter-organizational processes that involve not necessarily transactions, but rather information sharing and coordination (Sane, 2005). ERP systems, or their successors, will have to support the required interactions and processes among and within business entities, and work with other systems/modules that do the same. The back-office processes and activities of business network partners will not exist in a vacuum; many will overlap. There will be a great need for business processes that span organizational boundaries (some do at present), possibly requiring a single shared inter-enterprise system (we might call it a distributed ERP), or at least ERP systems that can communicate and co-process (share and divide processing tasks) with other ERP systems, which is probably the most practical solution, at least in the near future. Middleware, ASPs, and enterprise portal technologies may play an important role in the integration of such modules and systems (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002). Widespread adoption of a single ASP solution among supply chain partners may facilitate interoperability, as all partners would essentially use the same system. Could one ASP database serve all supply chain partners? Alternatively, a supply chain portal (vertical portal), jointly owned by supply chain
partners or by a value-added service provider that coordinates the entire supply chain, and powered by a single system serving all participants, could be the model for the future. Regardless of the means used to achieve greater external integration to complement internal integration, the new focus on supporting and facilitating inter-organizational processes will be important to the future of ERP-like systems, whatever they are called! Solution providers and consultants will strive to enable companies to communicate and collaborate with the other entities that comprise the extended enterprise (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002; Knapen, 2007). Internet-based technologies will unquestionably be key in supporting cross-enterprise ERP for the foreseeable future, and the ASP model (also called SaaS, software as a service) may just be the future of ERP in a business world focused on supply chain management (Burns, 2007). Note that ASP solutions are generally less costly and consequently reduce one particular component of financial risk: the cost of product obsolescence. It is noteworthy also that ASP (SaaS) solutions are moving ERP within reach of SMEs, as it costs much less to "rent" than to "buy." The lower cost of ASPs makes them more attractive to SMEs, who are increasingly targeted by vendors seeking new customers (Dahlen and Elfsson, 1999; Muscatello, Small and Chen, 2003; Burns, 2007). Some expect Web Services to play a prominent role in the future of ERP (O'Brien, 2002; ACW Team, 2004; Abboud and Vara, 2007). Web Services range from simple to complex, and they can incorporate other Web Services. The capability of Web Services to allow businesses to share data, applications, and processes across the Internet (O'Brien, 2002) may result in future ERP systems relying heavily on the Service Oriented Architecture (SOA), within which Web Services are created and stored, providing the building blocks for programs and systems. Web Service technology could put the focus where it belongs:
on putting together the very best functional solution to automate a business process (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002). The use of "best in breed" Web Service-based solutions might be more palatable to businesses, since it might be easier and less risky to plug in a new Web Service-based solution than to replace or add on a new product module. A greater role for Web Services (the SOA) is expected, and that too would heighten the importance of an Internet-based architecture to the future of ERP (Abboud and Vara, 2007; Burns, 2007). It should be noted that a Service Oriented Architecture is not incompatible with ASPs; in fact, it should make building, maintaining, and providing ASP products and services more efficient and effective. All from one, or best in breed? Reliance on a single vendor would seem best from a vendor's perspective, but it may not be best from the client's standpoint. While it may be advantageous to have only one proprietary product to install and operate, and a single contact point for problems, there are risks inherent in this approach. Switching costs can be substantial, and if a single vendor does not offer a module or solution needed by the client, then the client must develop it internally, do without it, or purchase it from another vendor. The market is in fact changing in some ways because of this situation. The market was once about what a product would do; it is evolving such that the focus is now on what the product will do in the future (Maguire, 2006). Potential customers are "thinking ahead" and seem to better understand the future consequences of today's system choice. Still, it is not uncommon for a client to be faced with trying to get diverse products to work together, and the problems of doing so are well documented. The single-source approach means an organization must place great faith in the vendor, and with the consolidation and changes taking place among enterprise solution providers, that can be risky. Nevertheless, the "one source" alternative seems most popular at present (Burns, 2007).
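To make the plug-in idea concrete, the sketch below exposes a single business function as a remotely callable service, using Python's standard XML-RPC machinery as a simple stand-in for the SOAP/WSDL Web Services of the period. The service name and inventory data are hypothetical.

# Minimal sketch of a business function exposed as a callable service.
from xmlrpc.server import SimpleXMLRPCServer

def check_availability(sku, quantity):
    # Stand-in inventory lookup a partner's system could invoke remotely.
    stock = {"SKU-1001": 120, "SKU-2002": 0}
    return stock.get(sku, 0) >= quantity

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(check_availability, "check_availability")
print("Inventory service listening on port 8000...")
server.serve_forever()   # blocks; run in its own process

A client elsewhere on the network could then call xmlrpc.client.ServerProxy("http://localhost:8000").check_availability("SKU-1001", 5). A "best in breed" replacement for the service can be swapped in behind the same interface without disturbing callers, which is precisely the attraction described above.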
So will it be single source, or best in breed? The best-in-breed approach will work well if greater interoperability and integration among vendor products is achieved (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002). There is a need for greater "out of the box" interoperability, and thus a need for standards. Ideally, products will reach a level of standardization where software modules behave like plug-and-play hardware: you just plug in a new module, the system recognizes it, configures itself to accommodate the new module, and, eureka, it works! While this is much to hope for, increased standardization brought about by developments like the Service Oriented Architecture, and the XML-based XBRL, might make such interoperability a reality, though probably not anytime soon. The fact that some are embracing standards for XML (Garbellotto, 2007) and more does provide some reason for hope, but whether the future of ERP software trends toward the single-source or best-in-breed approach remains to be seen. Regardless of the direction, integration technologies will be important in the new breed of modular but linked enterprise applications. Data warehouses, data mining, and various analytic capabilities are needed in support of the front-office and back-office processes and activities involved in CRM, SRM, SCM, field service management, business collaboration, and more. Likewise, they are important in business intelligence (both the process and BI systems) and strategic management. An important trend at present is the merging of ERP and CRM with BI (Burns, 2007). Data warehouses will play an important role in the future of ERP, because they will require data from ERP and provide data to support decision making in the context of ERP (ERP the process). Ideally, the data warehouse would be integrated with all front-office, back-office, and strategic systems to the extent that it helps close loops by providing accurate and timely data to support decision making in any context, in the form of on-line
analytical processing. Knowledge management systems (KMS) endowed with neural network and expert system capabilities should play a key role in decision making, as they will be capable of capturing, modeling, and automating decision-making processes. Data warehouses and KMS should enable future ERP systems to support more automated business decision making (Bhattacharjee, Greenbaum, Johnson, Martin, Reddy, Ryan, White and McKie, 2002), and they should be helpful in the complex decision making needed in the context of fully integrated supply chain management. More automated decision making in both front-office and back-office systems should eliminate or minimize human variability and error, greatly increase decision speed, and hopefully improve decision quality. Business Intelligence (BI) tools, which are experiencing significant growth in popularity, take internal and external data and transform it into information used in building knowledge that helps decision makers make more "informed" decisions (no pun intended). Current BI tools are largely designed to support strategic planning and control, but they will likely trickle down to lower-level decision makers, where their capabilities will be put to use in tactical and perhaps operational decision contexts. BI tools use data, typically from a data warehouse, along with data mining, analytic, statistical, query, reporting, forecasting, and decision support capabilities to support managerial planning and control. The combined capabilities of the data warehouse, KMS, and BI should contribute to faster, better, and less costly (in terms of the time and effort involved) decisions at all organizational levels. They should be helpful in making decisions in the inter-organizational context of supply chain management, where complexity is increased by the need to make decisions involving multiple supply chain partners.
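As a concrete illustration of the kind of roll-up a BI tool issues against a data warehouse, consider the following minimal sketch; the star-schema-style table, columns, and figures are hypothetical, and the in-memory sqlite3 engine merely stands in for a warehouse platform.

# Sketch of an OLAP-style roll-up over a (tiny) hypothetical fact table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales_fact (region TEXT, quarter TEXT, amount REAL);
    INSERT INTO sales_fact VALUES
        ('APAC', 'Q1',  90.0), ('APAC', 'Q2', 200.0),
        ('EMEA', 'Q1', 120.0), ('EMEA', 'Q2', 150.0);
""")

# Aggregate revenue by region and quarter -- the essence of a roll-up.
for row in conn.execute("""
        SELECT region, quarter, SUM(amount)
        FROM sales_fact
        GROUP BY region, quarter
        ORDER BY region, quarter"""):
    print(row)

Real warehouses add dimension tables, time hierarchies, and pre-computed cubes, but the decision-support pattern (aggregate, slice, compare) is the same.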
At least in the near future, it appears that greater emphasis will be placed on front-office systems, as opposed to back-office systems, and on sharing data, applications, and processes across the Internet (O'Brien, 2002). Back-office systems will not become unimportant, but they are more mature as a consequence of past emphasis, and many work quite well. Emphasis will be on more thorough integration of the modules that comprise back-office systems, integration of back-office systems with front-office and strategic systems, and integration of front-office, back-office, and strategic systems with the systems of other organizations, especially within the context of the supply chain (Knapen, 2007). At present, greater organizational effectiveness in managing the entire supply chain, all the way to the end customer, is a priority in business. The greater emphasis on front-office functions and on cross-enterprise communication and collaboration via the Internet simply reflects changing business needs and priorities. A 2004 ITtoolbox survey of ERP users in Europe, North America, Asia, India, and elsewhere showed great interest in improved functionality and in ease of integration and implementation (the top motives for adding new modules or purchasing new ERP systems). Furthermore, the same survey showed the greatest interest in modules for CRM, data warehousing, and SCM (the top three on the list). The demand for these particular modules/capabilities shows that businesses are looking beyond the enterprise. This external focus is encouraging vendors to seize the moment by responding with modules and systems that meet evolving business needs. The need to focus not just on new front-office tools but also on strategy will encourage greater vendor emphasis on tools like data warehouses and capabilities like business intelligence that support strategy development, implementation, and control. A CAMagazine enterprise software survey showed that, still in 2007, vendors remained focused on providing support for BI and strategic planning and control (Burns, 2007). The evolving environment of business suggests a direction for these comprehensive enterprise systems that would seem to make ERP an inappropriate label. The Gartner Group coined the term ERPII to describe their vision of the enterprise
system of the future, with an increased focus on the front office, strategy, and the Internet. ERPII was described as a business strategy and a set of collaborative operational and financial processes, internal and beyond the enterprise (Zrimsek, 2002). The Gartner Group projected that by 2005 ERPII would replace ERP as the key enabler of internal and inter-enterprise efficiency (Zrimsek, 2002). While we are not there yet, the systems they described are evolving, but the ERPII label has been lost. The name that seems to have stuck, at least for now, is Enterprise Information Systems, or just Enterprise Systems. Wikipedia (2007) describes Enterprise Information Systems as technology platforms that "enable organizations to integrate and coordinate their business processes. They provide a single system that is central to the organization and ensure that information can be shared across all functional levels and management hierarchies. Enterprise systems are invaluable in eliminating the problem of information fragmentation caused by multiple information systems in an organization, by creating a standard data structure." While the ERP label may linger for a while, ERP will likely be relegated to module/capability status, as a name more fitting for the evolving integrated, inter-enterprise, front-office/back-office, strategic systems replaces ERP, in much the same way that ERP replaced MRPII.
THE ERP EVOLUTION FRAMEWORK

This framework simply summarizes the evolution of ERP, relating the stages in its evolution to the business needs driving the evolution, as well as to changes in technology. Table 2 presents the framework. As MRP evolved into MRPII, then ERP, and finally ERPII/Enterprise Information Systems (the present state of ERP), the scope of the system expanded as organizational needs changed, largely in response to the changing dynamics of the competitive environment. As business has become increasingly global in nature and cooperation among enterprises more
necessary for competitive reasons, systems have evolved to meet those needs. One can hardly ignore the technological changes that have taken place, because the current state of technology is a limiting factor in the design of systems to meet evolving business needs. From our examination of the evolution of ERP, we conclude that the next stage of the evolution will come about, and be shaped by, the same forces that have shaped each previous stage: evolving business needs and advances in technology. We expect ERP (the traditional back-office system) to take its place alongside MRP and MRPII. The functions of ERP will remain important and necessary, as have the functions of MRP and MRPII, but ERP will likely be absorbed by something bigger, becoming an integral part of the Internet-based enterprise or inter-enterprise system of the future. Whether that all-encompassing system is called Enterprise Resource Planning, Enterprise Suite, Enterprise Information System, or by a label that currently resides in the back of some vendor employee's or researcher's mind remains to be seen. One thing seems certain: the next stage in the evolution will hinge on the same forces that shaped the systems of the past, business need and technological change.
CONCLUSION

ERP has evolved over a long period of time. MRP gave way to MRPII, then MRPII to ERP, ERP to ERPII, and in rather short order ERPII to Enterprise System/Enterprise Information System. It seems that Enterprise System is holding steady for now, but it too may give way to a new label that reflects the "inter-enterprise flavor" of future systems. MRP capabilities still exist, as will ERP capabilities, but future systems must provide an increasingly broad set of capabilities and modules that support the back office, the front office, and strategic planning and control, as well as integrating processes and activities across the diverse enterprises comprising supply chains and
Table 2. The evolution of ERP

System: MRP
Primary Business Need(s): Efficiency
Scope: Inventory management and production planning and control
Enabling Technology: Mainframe computers, batch processing, traditional file systems

System: MRPII
Primary Business Need(s): Efficiency, effectiveness, and integration of manufacturing systems
Scope: Extending to the entire manufacturing firm (becoming cross-functional)
Enabling Technology: Mainframes and minicomputers, real-time (time-sharing) processing, database management systems (relational)

System: ERP
Primary Business Need(s): Efficiency (primarily back office), effectiveness, and integration of all organizational systems
Scope: Entire organization (increasingly cross-functional), both manufacturing and non-manufacturing operations
Enabling Technology: Mainframes, mini- and microcomputers, client/server networks with distributed processing and distributed databases, data warehousing and mining, knowledge management

System: ERPII
Primary Business Need(s): Efficiency, effectiveness, and integration within and among enterprises
Scope: Entire organization extending to other organizations (cross-functional and cross-enterprise: partners, suppliers, customers, etc.)
Enabling Technology: Mainframes, client/server systems, distributed computing, knowledge management, Internet technology (including intranets, extranets, portals)

System: Inter-Enterprise Resource Planning, Enterprise Systems, Supply Chain Management, or whatever label gains common acceptance
Primary Business Need(s): Efficiency, effectiveness, coordination, and integration within and among all relevant supply chain members as well as other partners or stakeholders on a global scale
Scope: Entire organization and its constituents (increasingly global and cross-cultural), comprising the global supply chain from beginning to end as well as other industry and government constituents
Enabling Technology: Internet, Service Oriented Architecture, Application Service Providers, wireless networking, mobile wireless, knowledge management, grid computing, artificial intelligence
business networks. Whatever the name, current trends suggest certain characteristics we can reasonably expect. This future system will have to support E-Commerce and M-Commerce, so wireless technology (including but not limited to mobile) and the Internet will play a role in the evolving architecture. An Internet-based architecture now seems most likely, at least for the near future, and it may be a Service Oriented Architecture wherein Web Services are key. The increased emphasis on front-office systems and strategic planning and control will likely influence the new capabilities introduced by vendors over the next few years. Increased automation of decision making is to be expected, with contributions from knowledge management systems, data warehouses/data marts, and business intelligence systems, fueled by advances in the field of artificial intelligence. Greater interoperability of diverse systems and more thorough integration within and between enterprise systems is likely
to remain a priority. An environment for business applications much like the "plug and play" environment for hardware would make it easier for organizations to integrate their own systems and have their systems integrated with other organizations' systems. Such an environment awaits greater standardization. This ideal "plug and play" environment would make it easier for firms to opt for a "best in breed" strategy for application/module acquisition, as opposed to reliance on a single vendor for a complete package of front-office, back-office, and strategic systems. At present it seems that selecting a single vendor is preferred (Burns, 2007). Future developments in computer (hardware and software) and telecommunications technology will move us closer to effective inter-organizational system integration and make fully integrated supply chain management a reality. Perhaps we might call the evolving system an Interprise Resource Planning System, Interprise Management System, or Interprise Information
System to emphasize the inter-enterprise nature of these systems. Whatever they are called, it seems that what will be required of them goes far beyond what the enterprise resource planning (ERP) label would aptly describe, even with the "II" (ERPII) added! From the discussion of ERP's future, one can extrapolate certain desired capabilities of the interprise/enterprise system of the future. Following is a list of desired/required capabilities:

• Supports interaction of supply chain partners and inter-organizational processes;
• Provides a single corporate database to facilitate true functional system/module integration;
• At some point in time, possibly provides an inter-organizational database to integrate supply chain partners, perhaps supplied by an ASP or some other entity that supports the entire supply chain;
• Makes any necessary data transfer among, and integration of, modules smooth and consistent;
• Possesses the flexibility to continuously support agile companies responding to a dynamic business environment;
• Employs a fluid yet robust architecture reflective of evolving enterprise models and evolving technology like mobile wireless;
• Utilizes database and data warehouse models/solutions to support transaction-intensive applications (front office and back office), query-intensive applications, OLAP, and any other necessary internal and/or external interaction with the database or data warehouse;
• Takes into account partnering enterprise characteristics like culture, language, technology level, standards, and information flows, and provides the flexibility to adapt as partnering relationships change;
• Is backed by solution vendors who form global alliances with other vendors to better meet the needs of clients in any country;
• Is backed by solution vendors who embrace standards like XML, the Service Oriented Architecture, and evolving wireless standards, with due consideration to global business requirements (see the sketch following this list).
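As a small illustration of the standards-based message exchange envisaged in the final capability above, the sketch below serialises and parses a purchase-order document; the element names form a hypothetical schema for illustration only, not an actual XBRL or industry standard.

# Sketch of a standards-based XML exchange between supply chain partners.
import xml.etree.ElementTree as ET

# Partner A serialises a purchase order...
order = ET.Element("PurchaseOrder", attrib={"id": "PO-7731"})
ET.SubElement(order, "Supplier").text = "ACME Components"
item = ET.SubElement(order, "Item", attrib={"sku": "SKU-1001"})
ET.SubElement(item, "Quantity").text = "25"
message = ET.tostring(order, encoding="unicode")

# ...and Partner B parses it without any knowledge of A's internal systems.
parsed = ET.fromstring(message)
print(parsed.get("id"), parsed.find("Item").get("sku"),
      parsed.find("Item/Quantity").text)   # -> PO-7731 SKU-1001 25

Agreement on the schema, not on the implementation at either end, is what makes such an exchange work across enterprise boundaries.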
For researchers and practitioners the advice is simple. The two primary drivers in the evolution from MRP, to MRPII, to ERP, to ERPII, to Enterprise Systems were business need and technological change. Technological change made possible the development of systems to meet changing business needs. A need may exist for a while before the technology can help meet it, and a technology can exist for a while before someone recognizes its usefulness in meeting a current or evolving business need. In either case, the focus should be on monitoring business needs and monitoring technological change. Research that does both, and is geared towards bringing the two together, could make significant contributions to business. The ERP system of the future, whatever it may be called, will be found at the convergence of business need and technological change.
REFERENCES

Abboud, L., & Vara, V. (2007, January 23). SAP Trails Nimble Start-Ups As Software Market Matures. Wall Street Journal, C1.

Al-Mashari, M. (2002). Enterprise resource planning (ERP) systems: A research agenda. Industrial Management & Data Systems, 102(3), 165–170. doi:10.1108/02635570210421354

Arinze, B., & Anandarajan, M. (2003). A framework for using OO mapping methods to rapidly configure ERP systems. Communications of the ACM, 46(2), 61. doi:10.1145/606272.606274
Bhattacharjee, D., Greenbaum, J., Johnson, R., Martin, M., Reddy, R., & Ryan, H. L. (2002). ... Intelligent Enterprise, 5(6), 28–33.

Burns, M. (2007, September). Work in process: Enterprise software survey 2007. CAMagazine, 18–20.

Chung, S., & Snyder, C. (2000). ERP adoption: A technological evolution approach. International Journal of Agile Management Systems, 2(1), 24–32. doi:10.1108/14654650010312570

Conway, C. (2001). Top 20 Visionaries [Comments of Craig Conway]. VARbusiness (Manhasset), 1724, 35.

Dahlen, C., & Elfsson, J. (1999). An analysis of the current and future ERP market. Master's thesis, Industrial Economics and Management, The Royal Institute of Technology, Stockholm, Sweden.

Dalal, N. P., Kamath, M., Kolarik, W. J., & Sivaraman, E. (2004). Toward an Integrated Framework for Modeling Enterprise Resources. Communications of the ACM, 47(3), 83–87. doi:10.1145/971617.971620

Davison, R. (2002). Cultural complications of ERP. Communications of the ACM, 45(7), 109. doi:10.1145/514236.514267

Enterprise Information Systems. A definition from Wikipedia. http://en.wikipedia.org/wiki/Enterprise_information_systems, accessed 20 November 2007.

Garbellotto, G. (2007, October). The Data Warehousing Disconnect. Strategic Finance, 59–61.

Genovese, Y., Bond, B. A., Zrimsek, B., & Frey, N. (2001). The Transition to ERP II: Meeting the Challenges. http://www.gartner.com/DisplayDocument?doc_dc=101237, accessed 7 July 2005.
Green, J. (2003). Responding to the challenge. Canadian Transportation Logistics, 106(8), 20–21.

Higgins, K. (2005, May 23). ERP Goes On The Road. InformationWeek, 1040, 52–53.

ITtoolbox ERP Implementation Survey. (2004). Retrieved July 7, 2005, from http://supplychain.ittoolbox.com/research/survey.asp?survey=corioerp_survey&p=2

Knapen, J. (2007, May 14). SAP Sees Growth Ahead. The Wall Street Journal Online, http://online.wsj.article_print/SB1179166926818022214.html, accessed 11 November 2007.

Koch, C. (2004). Koch's IT Strategy: The ERP Pickle. Retrieved June 16, 2005, from http://www.cio.com/blog_view.html?CID=935

Kremers, M., & Dissel, H. V. (2000). ERP system migrations. Communications of the ACM, 43(4), 52–56. doi:10.1145/332051.332072

Kumar, K., & Hillegersberg, J. V. (2000). ERP experiences and evolution. Communications of the ACM, 43(4), 22–26. doi:10.1145/332051.332063

Lee, J., Siau, K., & Hong, S. (2003). Enterprise integration with ERP and EAI. Communications of the ACM, 46(2), 54. doi:10.1145/606272.606273

Letzing, J. (2007, April 25). Big Rivals Move In on Salesforce.com's Turf. The Wall Street Journal, B3G.

Lope, P. F. (1992). CIM II: the integrated manufacturing enterprise. Industrial Engineering (American Institute of Industrial Engineers), 24, 43–45.

Markus, M. L., Tanis, C., & van Fenema, P. C. (2000). Multisite ERP implementations. Communications of the ACM, 43(4), 42–46. doi:10.1145/332051.332068
Muscatello, J., Small, M., & Chen, I. (2003). Implementing enterprise resource planning (ERP) systems in small and midsize manufacturing firms. International Journal of Operations & Production Management, 23(8), 850–871. doi:10.1108/01443570310486329

Nattkemper, J. (2000). An ERP evolution. HP Professional, 14(8), 12–15.

Nikolopoulos, K., Metaxiotis, K., Lekatis, N., & Assimakopoulos, V. (2003). Integrating industrial maintenance strategy into ERP. Industrial Management & Data Systems, 103(3/4), 184–192. doi:10.1108/02635570310465661

O'Brien, J. M. (2002). J.D. Edwards follows 5 with ERP upgrade. Computer Dealer News, 18(12), 11.

Sane, V. (2005). Enterprise Resource Planning Overview. Ezine articles. Retrieved July 2, 2005, from http://ezinearticles.com/?Enterprise-Resource-Planning-Overview&id=37656
Scheer, A.-W., & Habermann, F. (2000). Making ERP a Success. Communications of the ACM, 43(4), 57–61. doi:10.1145/332051.332073

Soh, C., Kien, S. S., & Yap, J. T. (2000). Cultural fits and misfits: Is ERP a universal solution? Communications of the ACM, 43(4), 47–51. doi:10.1145/332051.332070

ACW Team (2004, August 23). SSA Global releases converged ERP with manufacturing capabilities. Asia Computer Weekly, 1.

Willcocks, L. P., & Stykes, R. (2000). The role of the CIO and IT function in ERP. Communications of the ACM, 43(4), 32–38. doi:10.1145/332051.332065

Zrimsek, B. (2002). ERPII: The Boxed Set. Retrieved July 7, 2005, from http://www.gartner.com/pages/story.php.id.2376.s.8.jsp
This work was previously published in Global Implications of Modern Enterprise Information Systems: Technologies and Applications, edited by Angappa Gunasekaran, pp. 17-31, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.3
Exploring Enterprise Information Systems

Malihe Tabatabaie, University of York, UK
Richard Paige, University of York, UK
Chris Kimble, Euromed Marseille École de Management, France

DOI: 10.4018/978-1-60566-856-7.ch021
ABSTRACT

The concept of an Enterprise Information System (EIS) has arisen from the need to deal with the increasingly volatile requirements of modern large-scale organisations. An EIS is a platform capable of supporting and integrating a wide range of activities across an organisation. In principle, the concept is useful and applicable to any large organisation or SME, whether an international or a national business organisation. However, the range of applications for EIS is growing: they are now being used to support e-government, health care, and non-profit/non-governmental organisations. This chapter reviews research and development efforts related to EIS and, as a result, attempts to precisely define the boundaries of the
concept of EIS, i.e., identifying what is and what is not an EIS. Based on this domain analysis, a proposal for using goal-oriented modelling techniques for building EIS is constructed; the proposal is made more concrete through illustration via an example.
INTRODUCTION

This chapter focuses on a grand challenge for organisations: dealing with their evolving requirements and goals, and with the impact of these changes on their Information Technology (IT). In particular, we are interested in large-scale organisations, such as multi-national companies or public-sector organisations, which are sometimes called enterprises in the literature. Organisations use IT in many different ways: to facilitate communication, to support commercial
transactions, to advertise, etc. In order to understand the effect of organisational and enterprise changes on the use of IT, we start by defining the nature of an organisation. The current literature defines an organisation as a group of elements (humans, automated systems, structures, policies, etc.) that are arranged in a specific manner to accomplish a particular purpose (Buck, 2000; Laudon & Laudon, 2007; Terry, 1975). This definition applies to small, medium, and large-scale organisations. As we said earlier, a large-scale organisation can sometimes be designated by the word enterprise. However, we find it helpful to be more precise in defining enterprise; in our view, an enterprise is a large-scale organisation that is involved in, and must orchestrate, more than one independent business process. We come to this definition by observing that many organisations, such as small IT houses, engage in a single business process. Likewise, some large organisations, such as online retailers, have a single business process. Organisations that have many different business processes that must be coordinated in some way, such as Mitsubishi, have different requirements and different characteristics. Such organisations are often very large scale (e.g., public health organisations) and multi-national. In our view, the need to coordinate different business processes is a key characteristic distinguishing an enterprise from other organisations. This chapter investigates the validity of an assumption regarding the root of the complexity of IT systems in complex organisations, where the IT systems support business processes directly. The assumption is that complexity is due to the following factors:

• The increasing size of IT systems and of the organisation itself;
• The interactions between different IT systems;
• The involvement of many different organisations in the construction and use of these IT systems; and
• The increasing rate of organisational and social change.
By investigating the validity of this assumption, and the importance of these factors, this chapter aims to contribute a better understanding of Enterprise Information Systems (EIS), their dimensions, their boundaries, and the challenges that arise in their construction and development. As part of this investigation, and as a result of the analysis of the literature that commences in the next section, we propose one key challenge for understanding and building EIS:

• Understanding diverse and volatile stakeholder requirements.
To aid in understanding these constructs, we propose the use of goal-oriented modelling techniques; this is discussed in the last section of this chapter. The rest of the chapter is organised as follows. The background section outlines the challenges in large-scale organisations as motivation for discussing the systems that can address these challenges. A specific instance of a large-scale organisation is an enterprise; hence, that section also discusses the requirements of IT systems for enterprises. One of the main difficulties in this area is the imprecise definition of EIS, and of how an EIS differs from a general-purpose IT system; hence, we provide a working definition for EIS in that section. The Enterprise Information Systems section describes EIS in more detail by discussing state-of-the-art definitions and effective elements, such as business and organisation, based on a literature review. The future trends section describes goal-oriented modelling techniques as a promising approach for attacking one of the main challenges of building an EIS, by making the system clearer to its stakeholders; it also provides an example to clarify this idea.
BACKGROUND

A brief review of the history of enterprises and software systems helps us to construct a working definition for EIS. This working definition is our basis for presenting an argument about what is and what is not an EIS, and for refining our understanding of the objectives of this type of system. This section therefore discusses some examples of EIS to shape the argument.
Challenges of Large-Scale Software Systems

Since the 1950s, organisations have been developing computer-based information systems to support their business processes. Through improvements to IT, computer-based systems have become more complex and yet more reliable; therefore, increasing functional requirements have been placed upon these systems (Edwards, Ward, & Bytheway, 1993). However, building this kind of system has many challenges, including fundamental challenges regarding the construction of such systems and the challenge of evolving systems to accommodate new requirements. Understanding the challenges of building such IT systems is essential for planning, design, and development, in order to provide as early as possible an understanding of the risks, as well as of the potential means for their mitigation. The challenges of understanding and building large-scale software systems can be observed in both the public and private sectors. In the public sector, understanding the challenges, and acting on them during the development process, is important because failure (whether financial or otherwise) can result in significant damage to the reputation of the government. The National Audit Office/Office of Government Commerce lists the common causes of project failure as follows (Projects, 2004):
1. Lack of clear connections between the project and the organisation's key priorities, including agreed measures of success
2. Lack of clear senior management and Ministerial ownership
3. Lack of effective engagement with stakeholders
4. Lack of skills and proven approach to project management
5. Lack of understanding of, and contact with, the supply industry at senior levels in the organisation
6. Evaluation of proposals driven by initial price rather than long-term value for money (especially securing delivery of business benefits)
7. Too little attention to breaking development and implementation into manageable steps
8. Inadequate resources and skills to deliver the total portfolio
The first item in this list refers to the conceptual gap between the priorities of projects and those of organisations; later in this chapter, further discussion addresses this challenge. In addition to these causes, hidden challenges threaten IT projects, particularly large-scale ones. For example, stakeholders should understand the conditions and limitations of the system; unrealistic expectations of the system can push the project beyond its intended scope and cause failure. Another important and hidden challenge is the lack of visualisation of software systems. Software is not visible and tangible to stakeholders; therefore, stakeholders cannot picture the functionality of the software before it is actually built, which can cause unrealistic expectations and other ill-defined problems. For example, in the case of constructing a building, stakeholders can visualise the building by looking at a mock-up; in the case of software there is no such clear and easy-to-understand mock-up. Flexibility and support for change are further challenges that software systems must deal with.
It is important to note that software systems can improve the speed of processes in organisations and can cope with complex, well-defined processes. However, they are not intelligent enough to improve the business model itself; hence, software systems are not the solution for an ill-defined business model. This challenge is seen mainly in large-scale software systems that deal with the business of organisations, such as EIS. The term Enterprise Information System is a common term in today's industry, but one that suffers from misinterpretation and an imprecise definition. The rest of this chapter discusses this type of system in more detail.
Large-Scale Software System: Enterprise Information System

A specific kind of large-scale IT system is one that supports enterprises; we call these software systems EIS. The business aspect of organisations motivates engineers to develop systems that satisfy the real requirements of organisations, particularly requirements associated with business processes. As a result, technologies such as Service Oriented Architecture (SOA) are currently popular in the design and implementation of systems for businesses. However, the term business often implies a process that focuses on delivering financial value; in practice, large-scale processes, and their associated IT systems, i.e. EIS, can support the delivery of different kinds of outcome, which are not always directly linked to financial value. In fact, today's businesses include both financial organisations and public organisations that deliver services to the public, and the success or value of these types of services is not always evaluated by the financial results they deliver. To commence our main discussion of EIS, we first discuss enterprises; in our view, an EIS supports the business processes of an enterprise, so it is important to understand what an enterprise is in order to understand what an EIS is.
What Is an Enterprise?

The literature is not rich on the history of enterprises; however, Fruin (1992) is one of the researchers who has explained the history of enterprises, briefly and with an eye on the Japanese revolution in industry and business. According to this book:

The enterprise system appeared around the turn of the twentieth century when the factory system was effectively joined with a managerial hierarchy in production and distribution. It is the emerging coordination of previously independent organizations for production, management, and distribution—shop-floor, front office, and sales office—that generates the organizational innovation known as the Japanese enterprise system. (Fruin, 1992, p. 89)

According to Fruin (1992), the notion of an enterprise system was established after the First World War, when new industries came to the market and many industries combined and amalgamated. Three types of enterprise were identified: national, urban, and rural, all of which share common elements such as inter-firm relations, marketing, mode of competition, finance, ownership, management, administrative coordination, and government relations. Mitsubishi is an example of an enterprise dating back to 1926; it integrates distinct yet affiliated companies, particularly Mitsubishi Heavy Industry, Mitsubishi Warehousing, Mitsubishi Trading, Mitsubishi Mining, Mitsubishi Bank, Mitsubishi Electric, Mitsubishi Trust, Mitsubishi Property, Mitsubishi Steel, Mitsubishi Oil, Nippon Industrial Chemicals, and Mitsubishi Insurance (Fruin, 1992). There are many other examples of enterprises, including Boeing, General Electric, Kodak, IBM, Norwich Union, Samsung, and Philips. From a consumer or client's point of view, these enterprises are often perceived as involving only a single organisation (e.g., Mitsubishi's car division).
Another example in this area is General Electric, which has independent divisions focusing on healthcare, aviation, oil and gas, energy, electrical distribution, security, and many others (GeneralElectric, 2008). History shows that enterprises have existed since the turn of the twentieth century; nevertheless, the concept still suffers from an unclear definition.
Conclusion

Today's large-scale IT systems increasingly provide support for the business processes of organisations. The aim of using information systems is to increase the automation of processes within organisations. Enterprises integrate organisations, departments, and even entire businesses to achieve shared goals. Processes within enterprises can benefit from IT infrastructure; in this section, we have argued for calling such IT infrastructure an EIS.
Working Definition

From the discussion of the history of enterprises and the challenges of large-scale software systems, we see that EIS are computer-based systems that satisfy the requirements of enterprises. EIS are designed and developed for enterprises rather than for a single business unit or department. They can deal with the problems of a large organisation (which may include different SMEs or different partners), and they can deal with the problems of a medium or small enterprise (an organisation that includes different departments). This working definition will be refined in later sections; after providing a brief background for EIS, we will discuss the definition in more detail.
ENTERPRISE INFORMATION SYSTEMS

Introduction

Based on the working definition developed in the last section, in this section we refine the definition to include additional detail, particularly in the organisational and business context. As a result, this section proposes a concrete definition for EIS. To help explain the definition further, and partly to validate it, we relate it to well-known examples of organisations.
Challenges

The notion of an enterprise is a widely used term, for instance in the case of Mitsubishi. However, a precise definition of what constitutes an enterprise, and hence of what precisely constitutes an EIS, is still missing. One of the main difficulties in defining an EIS is distinguishing it from any other large-scale software system. For example, perceived challenges in designing and developing an EIS arise in the form of having to meet fixed costs of development, dealing with volatile requirements, and managing the complexity of the EIS. However, these are challenges for all kinds of large-scale software systems. Therefore, we do not aim to enumerate all of the design and development challenges of EIS; instead, this section addresses one of the essential challenges, the unclear definition of EIS, and we aim to propose a definition for this term. To define EIS, this study reviewed the current definitions found in the literature; the next section covers some of them.
State-of-the-Art Definitions

Organisations continue to find that they need systems that span their entire organisation and tie various functions together. As a result, an
understanding of enterprise systems is critical to success in today's competitive and ever-changing world (Jessup & Valacich, 2006). A good definition of EIS introduces it as a software system with the specific ability to integrate the other software systems of an organisation:

Enterprise systems integrate the key business processes of a firm into a single software system so that information can flow seamlessly through the organization, improve coordination, efficiency, and decision making. Enterprise software is based on a suite of integrated software modules and a common central database. The database collects data from and feeds the data into numerous applications that can support nearly all of an organization's internal business activities. When new information is entered by one process, the information is made available immediately to other business processes. Organization, which implements enterprise software, would have to adopt the business processes embedded in the software and, if necessary, change their business processes to conform to those in the software. Enterprise systems support organizational centralization by enforcing uniform data standards and business processes throughout the company and a single unified technology platform. (Laudon & Laudon, 2007, p. 382)

This definition seems very specific about what an EIS is; however, there are points that it ignores. For example, it states that when new information is entered by one process, the information is made available immediately to all other business processes. It can be argued, however, that the information should be available to other processes only according to their access domain; by this, we mean that the level of access to the information should differ from process to process. It is not reasonable to expose information to processes that do not require it. Therefore, based on the access level of each process, only suitable and up-to-date
information should be visible. This security policy does not conflict with the idea of enterprise processes, whose goal is to let information flow seamlessly. Moreover, Strong and Volkoff (2004, p. 22) define an ES as a system whose task is to support and "integrate a full range of business processes, uniting functional islands and making their data visible across the organization in real time". This definition adds to the previous one the fact that the data and information held by the system should be understandable by all of its business processes. Another definition of enterprise systems is based on legacy systems; a legacy system is an existing computer system or application program that continues to be used because the company does not want to replace or redesign it (Robertson, 1997). Most established companies that have been using a system for a long time are in this group. Legacy systems mainly suffer from deficient documentation, slow hardware, and difficulties in improvement, maintenance, and expansion. However, there is evidence that over time EIS replace stand-alone applications and the functionality of legacy systems (Strong & Volkoff, 2004). In contrast to enterprise systems, legacy systems are not designed to communicate with other applications beyond departmental boundaries (Jessup & Valacich, 2006), even if middleware offers a potential solution for adapting novel parts to the legacy system. Nevertheless, given the cost of developing middleware, the following question comes to mind: can middleware alone solve the problem of integrating new subsystems with a legacy system? In short, the common idea in the existing definitions is that an EIS is about various businesses, business processes, organisations, information systems, and the information that circulates across the enterprise. In other words, EIS is about the business model of the organisation. Therefore, the two main elements of EIS are organisation and business; the two following sections cover these points.
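To make the access-domain argument above concrete, the following minimal sketch shows a shared enterprise record exposed to each business process only through the fields its access level permits; the process names, fields, and data are hypothetical.

# Sketch of per-process visibility over a shared enterprise record.
CUSTOMER_RECORD = {
    "name": "Jane Doe", "address": "12 High St",
    "credit_limit": 5000, "payment_history": ["2007-01: paid"],
}

ACCESS_DOMAINS = {
    "shipping":       {"name", "address"},
    "credit_control": {"name", "credit_limit", "payment_history"},
}

def view_for(process, record):
    # Return only the fields this process is entitled to see.
    allowed = ACCESS_DOMAINS.get(process, set())
    return {field: value for field, value in record.items()
            if field in allowed}

print(view_for("shipping", CUSTOMER_RECORD))        # no financial fields
print(view_for("credit_control", CUSTOMER_RECORD))  # no shipping address

Information still flows seamlessly to every process that needs it, but each process sees only its own projection of the shared data, reconciling the Laudon and Laudon definition with the security concern raised here.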
Organisation
The EIS definitions that we extracted from the literature link EIS to organisations (Laudon & Laudon, 2007; Strong & Volkoff, 2004) or to large companies (Jessup & Valacich, 2006); we assume that in both cases the definitions refer to the same concept: the organisation. Based on this assumption, it is vital to review the different types of organisations, as these can influence the different types of EIS. This section therefore discusses categories of organisations based on their goals. Elizabeth Buck categorises organisations in three groups (Buck, 2000):

• Public Organisations
• Private Organisations
• Not for Profit Organisations

Public organisations include central or local government, where elected members (e.g., ministers) decide on the goals of the organisation and may influence how those goals are achieved. The aim of this type of organisation is to supply services to or for the public, considering a 'value for money' rule. Examples of this type of organisation include the health service, prisons, police, social security, environmental protection, the armed forces, etc. Private sector organisations are owned by individuals or by other private organisations. This group of organisations can have the following goals:

• Satisfy their customers
• Satisfy their staff
• Satisfy their owners

All the above goals focus on increasing the market demand for products or services. Examples of not-for-profit organisations are charities, mutual societies, etc., which provide services for society. In a mutual society the customers are also the members, and therefore the owners, of the business; the value-for-money rule exists in this group too. The usual way to evaluate the success of this group of organisations is to measure how well they achieve their goals given the available resources. Table 1 illustrates some of the characteristics of the organisations described above and summarises the different types of organisations. By understanding the categories of organisations, we can focus on understanding their goals. By knowing the goals of organisations, we can design and develop an EIS that satisfies the defined requirements and goals; but other questions arise: what are the goals of an EIS? Are they similar to the goals of the organisation? It seems that an EIS's goals are a subset of the organisation's goals, and the closer the EIS's goals come to the organisation's goals, the better the EIS. The final, optimistic goal of an EIS is to further the goals of the organisation it serves. However, defining the goals of an EIS is the path to analysing and developing the organisation's business model, and thus the next section explores the role of business in the definition of EIS.

Table 1. Organisations' categories

Type of Organisation | Decision Makers | Value for Money | Owner | Goal(s) | Example
Public | Elected members | Yes | Public | Supply services to or for the public | UK central Government
Private | Shareholders | No | Shareholders | Satisfy customers / Satisfy staff / Satisfy owners | Mitsubishi
Not for Profit | Elected manager | Yes | Members/Customers | Provide services for the society or members | NCH (Children Charity)
Business

Another main factor that influences the architecture and functionality of an EIS is the business model (Figure 1). Supporting the strong relationships between business processes is the aim of an ES. In fact, the ability to define various business processes is what distinguishes enterprise systems from ordinary systems for a single company or department. For example, BMW is involved in a diversity of businesses: producing its own cars, producing engines for other car brands (e.g., Rolls-Royce), and building bicycles and boats. An ordinary system in a company contains components and subsystems that belong to one specific business and satisfy its requirements. An ordinary company may need to contact other companies to continue its business, but involving partners or suppliers is not its main concern. In contrast to an ordinary company, where the focus is on one particular business, an enterprise focuses on a collection of business processes, which may or may not be related to each other but all of which fall under the main principles of the enterprise. Indeed, making a profit is not essential to a business model. There are non-profit governmental and non-governmental organisations, such as healthcare organisations, that have their own business models dealing with, for example, the process of treating patients. The presentation of enterprise systems in this chapter is not about the detailed implementation of business functions; its focus is a top-level view of the whole business model of an organisation, in the sense defined by Clifton: business involves "a complex mix of people, policy and technology, and exist[s] within the constraints of economics and society" (Clifton, Ince, & Sutcliffe, 2000, p. 1).
Figure 1. Business model [based on (Kaisler, Armour, & Valivullah, 2005)]
Figure 1 illustrates the general structure of a business model, in which the business model comprises business processes and business functions. Business processes are "a set of logically related tasks performed to achieve a defined business outcome" (Davenport & Short, 1990, p. 100). In the case of BMW, for example, one business process is placing new orders with parts suppliers. When there is new demand for a specific car (e.g., the Z5 model), this market request creates a business event that triggers a set of business processes, such as increasing the resources for producing the Z5 (e.g., BP2 in Figure 1) and placing new orders with parts suppliers. Each of these business processes is subdivided into different business functions (e.g., BF2 and BF3 in Figure 1). Examples include the functions required for entering new orders, such as checking the parts suppliers' capacity for new demands, organising the time needed for each part to arrive at the assembly line, etc. According to Kaisler, Armour, and Valivullah (2005, p. 2), "business processes must be modelled and aligned across the function, data and information systems that implement the processes". Therefore, the term business function in our research refers to the functionality required to implement a business process. Figure 1 is a simplified account of the business process model: each business function can itself trigger a business process, and business processes can break down into further business processes, which is not shown in the diagram to keep it easy to understand. The aim of the diagram is mainly to explain business processes and functions in a general business model.
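The structure just described can be made concrete with a short sketch. The following Python fragment is purely illustrative (the class names, the Z5 order example, and the function wiring are ours, not part of the chapter's model); it shows how a business event can trigger processes that decompose into functions, with a function optionally triggering a further process.

```python
# A minimal sketch (not from the chapter) of the business model structure in
# Figure 1: events trigger processes, processes decompose into functions, and
# functions may in turn trigger further processes. All names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BusinessFunction:
    name: str
    # A function may itself trigger another business process, as noted above.
    triggers: Optional["BusinessProcess"] = None


@dataclass
class BusinessProcess:
    name: str
    functions: List[BusinessFunction] = field(default_factory=list)


def handle_business_event(event: str, processes: List[BusinessProcess]) -> None:
    """Walk the processes triggered by a business event and list their functions."""
    print(f"Business event: {event}")
    for bp in processes:
        print(f"  triggers process: {bp.name}")
        for bf in bp.functions:
            print(f"    requires function: {bf.name}")


# Hypothetical rendering of the BMW example in the text.
order_parts = BusinessProcess("Place new orders with parts suppliers", [
    BusinessFunction("Check suppliers' capacity for new demands"),
    BusinessFunction("Schedule part arrival at the assembly line"),
])
increase_resources = BusinessProcess("Increase resources for producing the Z5")

handle_business_event("New market demand for the Z5 model",
                      [increase_resources, order_parts])
```

Running the sketch prints the chain event, processes, functions, mirroring the arrows in Figure 1.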
Understanding business models is helpful for developing EIS because the role of an EIS is to integrate a full range of business processes (Strong & Volkoff, 2004). Before the concept of EIS was defined, legacy systems were the type of system developed to handle the requirements of organisations (Robertson, 1997). However, legacy systems are not designed to communicate with other applications beyond departmental boundaries (Jessup & Valacich, 2006); hence, the concept of EIS has grown to fill this gap. In short, the common idea in existing definitions is that an EIS amalgamates concerns from various businesses, business processes, organisations, information systems, and the information that circulates across an enterprise. In other words, it is about the business models of the organisation. However, a definition of EIS that emphasises only the financial, profit-making side of business is out of date. The next section proposes a definition that also considers other aspects of organisations, together with the domain and objectives of EIS.
Enterprise Information System Definition

This section proposes a definition of EIS, which is the result of our analysis of the state-of-the-art definitions and of industrial case studies. The definition, which considers both the business and the organisational aspects of EIS, is as follows: An Enterprise Information System is a software system that integrates the business processes of organisation(s) to improve their functioning. Integration of business processes plays an important role in this definition. Integration can be accomplished by providing standards for data and business processes; these standards are applied to various parts of the system, such as a database or a cluster of databases. As a result of such integration, information can flow seamlessly. Another point in this definition is the software character of EIS. At this stage, we consider an EIS to be a type of Information System; therefore, the system involves humans and hardware as well as software. The next term used in the definition is organisation. Different types of organisations were discussed earlier in this chapter; here, "organisation(s)" may mean an organisation with its partners, or a group of organisations. Table 2 refines the above definition and describes what we propose as the objectives, goals, domain, and challenges of EIS.
Table 2. EIS boundaries, objectives and challenges

Objective | Integrity of the organisation and collaborators; seamless information flow; suitable access to data and information for various stakeholders; matching the software system structure with the organisation structure
Goal | Improving coordination, efficiency, and decision-making of business processes in an organisation
Domain | Covers the internal and external business activities of the organisation
Challenge | Security challenges that should be considered carefully for the organisation's processes (otherwise, mixing the required information of one business process with another can cause problems for the organisation); improving flexibility in organisation processes
In addition, Figure 2 describes the definition of EIS graphically (BP in this figure stands for business process). As can be seen in the figure, each organisation contains various business processes. Moreover, in Figure 2 the database could be a cluster of databases; however, it is highly likely that there would be a single interface for exchanging data with the database, without any concern about where the data is or what the various resources are. The larger rectangle describes the boundaries of the EIS, which are flexible.

Figure 2. An enterprise information system

The two following sections continue the discussion of EIS by presenting some examples in this area. Reviewing these examples leads to a better clarification of what is, and what is not, an EIS.
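The single-interface point above can be sketched in code. The following minimal Python fragment assumes a toy key-value routing rule; the class and method names are hypothetical, and it only illustrates the idea that business processes exchange data through one interface, regardless of which database in the cluster holds the data.

```python
# A minimal sketch, assuming nothing beyond the text above: a single data-access
# interface that hides whether the data lives in one database or a cluster.
from typing import Dict, Any


class EnterpriseDataInterface:
    """Single point of data exchange for the EIS; callers never know
    which physical database holds a given record."""

    def __init__(self, cluster: Dict[str, Dict[str, Any]]):
        # cluster maps a shard/database name to its records; the dicts are
        # stand-ins for real database connections.
        self._cluster = cluster

    def _locate(self, key: str) -> Dict[str, Any]:
        # The routing rule is hidden from business processes; here, a toy rule.
        for db in self._cluster.values():
            if key in db:
                return db
        raise KeyError(key)

    def read(self, key: str) -> Any:
        return self._locate(key)[key]

    def write(self, key: str, value: Any, shard: str = "default") -> None:
        self._cluster.setdefault(shard, {})[key] = value


# Business processes use one interface, regardless of where the data resides.
eis_data = EnterpriseDataInterface({"hr": {"emp:42": "Jane"}, "sales": {}})
eis_data.write("order:7", {"item": "Z5"}, shard="sales")
print(eis_data.read("emp:42"), eis_data.read("order:7"))
```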
Examples of Enterprise Information Systems

The review of industrial cases of what might be considered an EIS brings our discussion to the example of Mitsubishi. As mentioned earlier, Mitsubishi, with more than 400 companies all around the world, is an example of an enterprise (Mitsubishi, 2007). Thirty top-level managers manage all of the individual Mitsubishi companies. This does not mean that each company lacks the freedom to make its own decisions; rather, this group of thirty managers makes some of the top-level decisions and provides the high-level standards that all the companies should observe. In this case, if there is a computer-based system that links the various parts of the Mitsubishi organisation (including the high-level managers) together and makes information flow seamlessly between them, then we view this system as an EIS. Developing such a system is a large and complex problem; hence, there is a need for powerful, reusable solutions for developing this type of system in a manner that can benefit the whole enterprise. Another example is the infrastructure being developed to support the National Health Service (NHS) in the United Kingdom: the information systems being developed to support the management of patients' records and prescriptions can be considered an EIS, because this IT infrastructure aims to connect independent departments within and outside of the NHS. While we are looking at the NHS, which is a public sector organisation, we can raise e-Government as another example of a public sector organisation that may be supported by, and hence benefit from, EIS infrastructure, because it connects various governmental organisations or departments together to let information flow seamlessly between them.

What Cannot Be an Enterprise Information System

Having discussed public, private, and governmental examples of EIS, the next step is to introduce some examples of Information Systems that are not EIS according to our definition. eBay is one of the best-known international Information Systems, focused on the auction industry. This online market, which involves around 147 million people (Gopalkrishnan & Gupta, 2007), provides a platform for individuals or companies to trade their products or services, but it does not connect the business processes of organisations together. According to our definition, an EIS connects the different business processes of organisations, or the departments of an organisation, to make information flow seamlessly; based on this characteristic, eBay is not an EIS. Its information system is the element that processes data and puts them online; there is no connection between business processes, because that is not a requirement of this Information System. The same argument applies to Amazon: even though it is a large-scale, international online shop, it is not an EIS.
Conclusion

In short, this section described EIS in more detail by providing a definition of EIS. Defining any kind of system is essential for defining its domain and objectives; without this basic information, research in the area will not be consistent. However, we do not claim that the given definition is the only possible definition of EIS. It is based on our studies, observations, interviews, and comparisons of current theoretical and practical definitions and case studies. Part of this ongoing work is presented in this chapter.
To make the results of our study on the definition of EIS clearer, two examples were discussed in this section: one case that can be an EIS and one that cannot. This categorisation is based on the criteria set out in the definition given in this chapter; each of these cases could therefore be the subject of further discussion on whether it is an EIS. Depending on the point of view and the context of the argument, the same Information System may or may not be an EIS; it is therefore crucial to bear in mind the writers' point of view and the given definition when considering the preceding examples. Having discussed what can be an EIS, the next section focuses on an approach for developing this type of system.
Future Trends

Goal-based and goal-oriented thinking is used to plan for the future or to solve problems (Kim, Park, & Sugumaran, 2006). Goal-oriented techniques have been proposed as one possible way to manage some of the difficulties associated with developing large-scale complex systems (Kavakli, Loucopoulos, & Filippidou, 1996), particularly the challenge of clearly identifying and specifying requirements. As discussed in the previous section, an EIS is an instance of a large-scale complex system. This section promotes the idea of using goal-oriented modelling techniques for developing EIS by briefly discussing these techniques and their roles in defining EIS system requirements. We summarise the discussion by presenting an example of a goal graph.
Goal-Oriented Techniques

Goal-oriented techniques have been widely discussed in the requirements engineering domain (Kelly, McDermid, Murdoch, & Wilson, 1998; van Lamsweerde, 2001, 2004). Goals are also used in the safety and security research community, for example to present safety cases and safety arguments (Kelly, 2004; Kelly & Weaver, 2004), and in software assessment (Weiss, Bennett, Payseur, Tendick, & Zhang, 2002). Kelly (1998) defined a goal as a 'requirements statement', while van Lamsweerde (2003) used goals as criteria for deriving software architecture. Kim et al. (2006), following van Lamsweerde (2003), treat the goal model as criteria for designing the architecture of a system; the aim of the software architect is then to implement a system, based on that architecture, which accomplishes the goals (Kim, Park, & Sugumaran, 2006). Logically, goals are the motivation for developing a system; therefore, all the stakeholders should have a clear understanding of the goals of the system. In addition, the goals of the system should be realistically defined before any other step of development proceeds. There have been attempts to present goals in graphical notations, such as GSN (Kelly, 1998), KAOS (van Lamsweerde, 2001), and the notation of Kim, Park, and Sugumaran (2006). KAOS, moreover, defines a formal textual notation for describing goals in addition to informal text. This is valuable because it allows a larger audience to understand and benefit from the goal model. Different stakeholders require different forms of presentation of their goals. For example, high-level managers may not need to see a formal explanation of the goals, which they may not understand; they can better understand an informal explanation in a simple diagram. On the other hand, a programming team is likely to require a detailed formal explanation of the goals in order to understand and implement the system in the correct, expected way. It is important to bear in mind that goal diagrams are intended to make the system clearer to different stakeholders; goal-oriented approaches should therefore avoid adding confusion for them. Any approach that makes the goals of the system clearer to stakeholders should be considered, even if this means different goal models for different stakeholders. The next section explains an approach for designing a goal model. The approach is described at a very high level, without details; the aim is simply to introduce readers to a possible way of developing a goal model. It draws on similar studies in this area, such as Kelly (1998), Kim, Park, and Sugumaran (2006), and van Lamsweerde (2001).
Designing a Goal Model

One of the main reasons software systems fail is unrealistic planning and design. Hence, the aim of goal-oriented approaches, as discussed in the previous section, is to provide an environment in which different stakeholders can understand the goals at different levels of abstraction and decomposition. One way to accomplish this is to use a graphical modelling language, such as GSN; another is to document the requirements and design precisely and accurately in a textual format. It is also possible to present a prototype of the system and discuss it with various stakeholders. All these approaches, and similar ones, can be beneficial for different types of systems. The approach discussed in this section is a simple one for developing a goal model that presents the system's high-level goals clearly. It does not involve the details of the goals or their descriptions; this helps to provide an understandable, top-down model of the system's high-level goals for non-technical decision makers. The basis of this approach is to create a list of goals, a list of actions, and a list of observed problems. Goals were defined earlier; actions, according to Kim, Park, and Sugumaran (2006, p. 543), "are the atomic unit for representing software behaviour, which can be observed at runtime and has a start and end point of execution". That paper also argues that most methods in a class diagram can be actions, but because the runtime of actions should be observable, the size of an action should be restricted so that it can be observed in the software model. Problems, in this context, are the challenges and difficulties that occur when developers consider the implementation and execution of a system; these can be technical difficulties, goal conflicts, etc. After producing the lists of goals, actions, and problems, the relations between these elements should be established. The notation here is similar to that of Kim, Park, and Sugumaran (2006):

(Gz, An) → Px

G represents a goal, A represents an action, and the subscripts z, n, and x are identifiers. For instance, G1.1.2 means the goal with ID 1.1.2, and an example of an action could be 1.1.2/1, the action required to achieve this goal. The above notation means that the action with ID n, which is required for satisfying the goal with ID z, can cause the problem with ID x; it describes the case where an action that belongs to a goal causes one or more problems. The next notation describes the case where a problem can be solved using a specific action:

Px → (Gz, An)
The above notation means that, to solve the problem P with ID x, the goal G with ID z is required, and to satisfy this goal the action with ID n should be performed. If the development team does not yet know the required action, the action An can be replaced with '?'. Before implementation of the system starts, all the question marks should be replaced with actions that satisfy their goals. If, because of limitations in technology, resources, etc., one or more question marks cannot be replaced by a solution, there should be a bottom-up check of whether the system is still worth implementing; given the unsolved problem(s), the functionality of the system must not rely on the unavailable solutions. However, as discussed before, a goal model should have different levels of abstraction for different users. Designers should therefore avoid defeating the purpose of the goal model, which is to make the system's goals clear to stakeholders, by presenting all the information to those who do not require it. The next section provides an example of a goal model for stroke care.
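As a rough illustration of this bookkeeping, the following Python sketch records the two relations and implements the check that no '?' remains before implementation starts. The IDs follow the notation above; the data values themselves are hypothetical.

```python
# A minimal sketch of the goal/action/problem bookkeeping described above.
# The IDs mirror the notation (G1.1.2, action 1.1.2/1, P1); the data itself
# is hypothetical and only illustrates the '?' (missing action) check.
goals = {"G1.1.2": "Collect treatment data from all sources"}

# causes: (goal, action) -> problem, i.e. (Gz, An) -> Px
causes = {("G1.1.2", "1.1.2/1"): "P1"}

# solutions: problem -> (goal, action), i.e. Px -> (Gz, An); '?' marks an
# action the development team has not identified yet.
solutions = {"P1": ("G1.1.2", "?")}


def unresolved_problems(solutions: dict) -> list:
    """Return problems whose solving action is still a question mark.
    Before implementation starts, this list should be empty."""
    return [p for p, (_goal, action) in solutions.items() if action == "?"]


print(unresolved_problems(solutions))  # ['P1'] -- not yet ready to implement
```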
Example

The aim of the case study is to design the goal model for a system that collects treatment data for a specific serious condition. The data can be collected from different sources, such as doctors, researchers, nurses, emergency staff, etc. Moreover, each of these stakeholders can have a different way of communicating with the database: laptop, paper, phone, etc. The role of the system is to collect the data from the various sources, analyse them, and provide data as output for different purposes. Following the case study in Bobrow and Whalen (2002), we can call this a knowledge sharing system. Figure 3 illustrates the described system. In the figure, the boundary of the system is shown as a box surrounding it. The large arrow on the left-hand side of the box illustrates that this system is one of the information systems in the defined enterprise. The enterprise in this figure is shown as a pyramid, a common symbol of an organisation. To keep the figure simple and clear, we did not include the option that this EIS could be shared with and used by other enterprises around the globe. Note that in designing an EIS we try to form a big picture of the enterprise that includes possible changes and extensions in the future; an EIS should not have a local design that cannot accommodate change. The current solution for extending a system or merging systems is mainly to develop middleware, but enterprise architects should avoid relying on middleware alone, considering that in some cases middleware can be so expensive that the organisation's decision makers may decide to use a manual, paper-based system instead. After drawing an overall view of the requested system, the goals of the system should be defined. Each goal should have its own action, which acts as a solution for the system and its possible problems. Figure 4 illustrates the goal diagram for this system. The diagram is very high-level, targeting non-technical decision makers, and is the starting point for creating a complete goal model of the system. As can be seen in the diagram, goals have their own unique identities, shown here as numbers. These numbers make goals traceable within the model and also make it possible to implement the model in diagramming tools. This is an AND-OR graph: a parent goal with OR children is satisfied when at least one of its child goals reaches a solution, analogous to AND-OR in mathematical logic. Furthermore, the goal graph is a weighted graph; hence, goals at the same level can be prioritised over one another. Prioritising goals is helpful in different contexts, for example in allocating resources, or in cases where satisfying a lower-priority goal depends on satisfying a higher-priority goal.
Figure 3. Example of knowledge sharing system
Figure 4. Goal graph
In general, Figure 4 shows the basic requirements for the goal graph. We emphasise that the aim of this graph is to provide a clear, high-level image of the system's goals and to present it to stakeholders, to be used, for example, for brainstorming.
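For readers who want to experiment, the following Python sketch shows one possible way to evaluate such an AND-OR goal graph. The chapter prescribes only the satisfaction rule and the idea of weights, so the data structure, field names, and the example graph fragment are assumptions of ours.

```python
# A minimal sketch of evaluating the kind of AND-OR goal graph shown in
# Figure 4. The structure and weights are hypothetical; the chapter gives
# only the AND/OR satisfaction rule and the notion of goal priorities.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Goal:
    gid: str                 # goal ID, e.g. "1.1.2", used for traceability
    kind: str = "AND"        # how child goals combine: "AND" or "OR"
    weight: float = 1.0      # priority relative to sibling goals
    satisfied: bool = False  # set directly on leaf goals
    children: List["Goal"] = field(default_factory=list)

    def is_satisfied(self) -> bool:
        if not self.children:          # leaf goal
            return self.satisfied
        results = [c.is_satisfied() for c in self.children]
        return all(results) if self.kind == "AND" else any(results)


# Hypothetical fragment: the root needs ALL children; goal 1.2 needs ANY child.
root = Goal("1", "AND", children=[
    Goal("1.1", satisfied=True, weight=2.0),   # higher-priority sibling
    Goal("1.2", "OR", weight=1.0, children=[
        Goal("1.2.1", satisfied=False),
        Goal("1.2.2", satisfied=True),         # one OR child suffices
    ]),
])
print(root.is_satisfied())  # True
```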
Conclusion

This section has proposed future work for the study of how to develop EIS. The fact that developing the required components of an EIS is similar to developing other large-scale complex systems makes this field of work valuable, because finding better solutions to the different challenges of Information Systems provides a platform for developing various kinds of suitable systems. This effort and study of EIS makes life easier and safer for the individuals and organisations that benefit from this type of IT product; it influences government performance and provides a more innovative platform for industry. All these reasons give us sufficient motivation to continue improving this study.
Conclusion

Looking at the various ways the word 'enterprise' is used makes clear that the term is ambiguous. Yet this term, and others such as 'Enterprise Architecture' and 'Enterprise Information System', are increasingly used. This encouraged us to examine these terms and clarify them for future use in our research and related work. The simple definition of an enterprise is an entity engaged in economic activities; this definition does not meet the requirements for defining an EIS. The argument in this chapter illustrates that an EIS covering the requirements of any entity engaged in economic activity is simply an IS; such systems can hardly be categorised as a separate group under the name EIS. The fact that the number of people employed by an organisation can increase the complexity of the software system in some cases is hardly the leading factor in developing an EIS. The basic requirement for research on how to improve the development of EIS is more knowledge of what an EIS is. Consequently, the main objective of this chapter was to explore the boundaries of EIS, which was achieved by developing a definition of EIS. This definition captured what we believe are the important characteristics that should be considered when attempting to build an EIS: characteristics such as organisations, their goals, business processes, and the business model. None of these characteristics is based on the size of the organisation; the definition therefore covers enterprises of different sizes, small, medium, or large. Accordingly, this chapter did not use a size-specific term such as SME (Small and Medium Enterprises) to define EIS. Any discussion of EIS encompasses a number of facets, including general IT system development, requirements, organisational theory, and distributed systems technology. Our aim is to define more precisely what an EIS is, and what it is not, to assist in providing better methodologies and techniques for building such increasingly important software systems. We believe it is clear that the volatile requirements of modern organisations require special business processes, and these business processes cannot be fully realised without IT systems and, in some cases, without an EIS. A high-quality EIS can provide a connection between the different, independent business processes in an enterprise. As discussed, we argue that goal-oriented modelling techniques are important for helping to understand what is required of a business or organisation, and for understanding what an EIS should provide. Thus, we argue that a first step in developing a system for an enterprise is to find and justify the enterprise's goals. When all the stakeholders have a clear idea of the goals of the enterprise, their expectations will in principle be realistic; the desired system's boundary can be defined more precisely, and in principle building the system becomes possible. We do not claim that following this approach fully guarantees the development of a suitable EIS: such systems are always challenging to build, and goal-oriented techniques tackle only an important part of a large problem. Additional research and experiments are needed to identify what further techniques are required to supplement goal-oriented modelling for designing, implementing, deploying, and maintaining Enterprise Information Systems.
Acknowledgment

We would like to thank Dr. Fiona Polack for her valuable suggestions.
References

Bobrow, D. G., & Whalen, J. (2002). Community knowledge sharing in practice: The Eureka story. Reflections: The SoL Journal, 4, 47–59.

Buck, E. (2000). Different types of organisation. NEBS Management/QMD Ltd. Retrieved September 1, 2008, from http://www.teamsthatwork.co.uk/Organise%20&%20improve%20team%20work%201.pdf

Clifton, H., Ince, D. C., & Sutcliffe, A. G. (2000). Business information systems (6th ed.). Essex, England: Pearson Education Limited.

Davenport, T. H., & Short, J. E. (1990). The new industrial engineering: Information technology and business redesign. In M. Lewis & N. Slack (Eds.), Operations management: Critical perspectives on business and management (pp. 97–123). London and New York: Routledge.

Edwards, C., Ward, J., & Bytheway, A. (1993). The essence of information systems (2nd ed.). London: Prentice Hall.

Fruin, M. W. (1992). The Japanese enterprise system. New York: Oxford University Press.

General Electric. (2008). Products and services. Retrieved September 1, 2008, from http://www.ge.com/products_services/index.html

Gopalkrishnan, J., & Gupta, V. K. (2007). eBay: "The world's largest online marketplace" - A case study. Conference on Global Competition and Competitiveness of Indian Corporate (pp. 543-549).

Jessup, L., & Valacich, J. (2006). Information systems today: Why IT matters (2nd ed.). NJ: Pearson Education, Inc.

Kaisler, S., Armour, F., & Valivullah, M. (2005). Enterprise architecting: Critical problems. Paper presented at the 38th Annual Hawaii International Conference on System Sciences, Island of Hawaii, HI.

Kavakli, E. V., Loucopoulos, P., & Filippidou, D. (1996). Using scenarios to systematically support goal-directed elaboration for information system requirements. Paper presented at the IEEE Symposium and Workshop on Engineering of Computer Based Systems (ECBS '96), Friedrichshafen, Germany.

Kelly, T. (2004). A systematic approach to safety case management. Paper presented at the SAE 2004 World Congress, Detroit, MI.

Kelly, T., & Weaver, R. A. (2004). The goal structuring notation - A safety argument notation. Paper presented at the 2004 International Conference on Dependable Systems and Networks (DSN 2004), Florence, Italy.

Kelly, T. P. (1998). Arguing safety - A systematic approach to managing safety cases. University of York, York, UK.

Kelly, T. P., McDermid, J., Murdoch, J., & Wilson, S. (1998). The goal structuring notation: A means for capturing requirements, rationale and evidence. In A. J. Vickers & L. S. Brooks (Eds.), Requirements engineering at the University of York. York: University of York.

Kim, J. S., Park, S., & Sugumaran, V. (2006). Contextual problem detection and management during software execution. Industrial Management & Data Systems, 106, 540–561. doi:10.1108/02635570610661615

Laudon, J. P., & Laudon, K. C. (2007). Management information systems: Managing the digital firm (10th ed.). Prentice Hall.

Mitsubishi. (2007). About Mitsubishi. Retrieved September 1, 2008, from http://www.mitsubishi.com/e/group/about.html

Robertson, P. (1997). Integrating legacy systems with modern corporate applications. Communications of the ACM, 40(5), 39–46. doi:10.1145/253769.253785

Strong, D. M., & Volkoff, O. (2004). A roadmap for enterprise system implementation. IEEE Computer Society, 37, 22–29.

Terry, P. (1975). Organisation behaviour. Industrial & Commercial Training, 7(11), 462–466. doi:10.1108/eb003504

The Challenges of Complex IT Projects. (2004). Retrieved September 1, 2008, from http://www.bcs.org/server_process.php?show=conWebDoc.1167

van Lamsweerde, A. (2001). Goal-oriented requirements engineering: A guided tour. Paper presented at the 5th IEEE International Symposium on Requirements Engineering (RE'01), Toronto, Canada.

van Lamsweerde, A. (2003). From system goals to software architecture. In Formal methods for software architectures (LNCS 2804, pp. 25–43).

van Lamsweerde, A. (2004). Goal-oriented requirements engineering: A roundtrip from research to practice. Paper presented at the 12th IEEE Joint International Requirements Engineering Conference (RE'04), Kyoto, Japan.

Weiss, D. M., Bennett, D., Payseur, J. Y., Tendick, P., & Zhang, P. (2002). Goal-oriented software assessment. Paper presented at the 24th International Conference on Software Engineering (ICSE '02), Orlando, FL.

This work was previously published in Social, Managerial, and Organizational Dimensions of Enterprise Information Systems, edited by Maria Manuela Cruz-Cunha, pp. 415-432, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 1.4
Enterprise Systems in Small and Medium-Sized Enterprises Sanjay Mathrani Massey University, New Zealand Mohammad A. Rashid Massey University, New Zealand Dennis Viehland Massey University, New Zealand
Abstract

The market for enterprise systems (ES) continues to grow in the post-millennium era as businesses become increasingly global, highly competitive, and severely challenged. Although the large-enterprise space for ES implementation has stagnated, ES vendors are now focusing on the small to medium-sized enterprise (SME) sector for implementations. This study looks at the current ES implementation scenario in the SME sector. The purpose of the study is to gain insights into what a typical case of ES implementation is, and to understand how current implementations in the SME sector differ from earlier implementations in the large enterprise sector, through the perspective of ES vendors, ES consultants, and IT research firms in a NZ context. Implications for practice in implementation processes, implementation models, and organizational contexts are discussed.

DOI: 10.4018/978-1-59904-859-8.ch013
Introduction

Enterprise systems (ES), also known as enterprise resource planning (ERP) systems, are large, complex, highly integrated information systems designed to meet the information needs of organizations and are, in most cases, implemented to improve organizational effectiveness (Davenport, 2000; Hedman & Borell, 2002; Markus & Tanis, 2000). They are comprehensive, fully integrated software packages that support the automation of most standard business processes in organizations, including extended modules such as supply chain management (SCM) or customer relationship management (CRM) systems. ES applications connect and manage information flows across complex organizations, allowing managers to make decisions based on information that accurately reflects the current state of their business (Davenport & Harris, 2005; Davenport, Harris, & Cantrell, 2002). In a more integrated and global market, extended ES offer new functions and new ways of configuring systems, as well as web-based technology to establish the integrated, extended business enterprise (Shanks, Seddon, & Willcocks, 2003). The market for ES continues to grow despite much speculation about its future in the post-millennium era. Although most large enterprises have completed their ES implementations by now, the ES market continues to grow in the small and medium-sized enterprise (SME) sector. A number of research studies have been conducted to establish and understand the critical success factors for ES implementations (e.g., Allen, Kern, & Havenhand, 2002; Bancroft, Seip, & Sprengel, 1998; Holland & Light, 1999; Parr & Shanks, 2000; Plant & Willcocks, 2006; Sarker & Lee, 2000; Scott & Vessey, 2002; Skok & Legge, 2001; Sumner, 1999; Yang & Seddon, 2004). However, there has been little research that examines ES implementation at the strategic decision-making process level (Viehland & Shakir, 2005) or compares current implementations in the SME sector with earlier implementations in large organizations. The purpose of this study is to examine the current ES implementation scenario in New Zealand. The main objectives are to explore what a typical case of ES implementation is and to understand how current implementations in the SME sector differ from earlier implementations in the large enterprise sector. The study does so through a practitioners' perspective, with interview data collected from ES vendors, ES consultants, and IT research firms who are actively engaged in ES implementation. This approach differs from the organizational approach usually found in the literature. This is a replication study following an approach similar to that of Shakir (2002), who also investigated aspects of ES implementation in the NZ vendor-and-consultant community. The focus of that study was to identify the key drivers influencing ES adoption and implementation (e.g., Shakir & Viehland, 2004), whereas the focus of the current study is to understand how current implementations in the SME sector differ from earlier implementations in the large enterprise sector. The current study extends and builds upon existing ES research. Semi-structured interviews were conducted with key players in ES implementation in New Zealand, including ES vendors, ES consultants, and IT research firms, to explore the current ES implementation scenario. Several measures, such as the number of users, modules implemented, cost of implementation, number of sites/locations where implemented, industry type, organization size, implementation phases, time to implement, implementation partners, and levels of customization, were discussed to understand the typical case of ES implementation and current implementation practices. The empirical findings are analyzed and reported in this paper. The study has been conducted in a New Zealand (NZ) context, which can be extended to show current trends worldwide.
Research Methodology

Using a qualitative research methodology, data were collected by way of semi-structured interviews with ten key respondents in the ES implementation industry. The interviews were carried out between February and August 2006. The key respondents were senior ES consultants or senior managers in organizations that are key players in the field of ES in New Zealand, principally major ES vendors, ES consultants, and IT research organizations (see Table 1).

Table 1. Key respondents for the study

ES Vendors (flagship ES products) | ES Consultants | IT Research
SAP NZ (SAP) | PricewaterhouseCoopers NZ | Gartner Limited NZ
Oracle NZ (Oracle, J.D. Edwards, PeopleSoft) | Ernst & Young NZ | IDC NZ
Microsoft NZ (Dynamics (earlier Navision)) | KPMG Consulting NZ |
Infor NZ (Mapics, SSA Global (earlier BaaN)) | EMDA NZ |

The positions of the respondents included: director of professional services, consulting manager, managing director, consulting practice director, partner group manager, vice president, consulting partner, general manager, and business consultant. The purpose of the interviews was to seek insights from experienced ES stakeholders and professionals in answering: what is a typical ES implementation in New Zealand, and what are the current ES implementation practices? Questions were asked to extract information such as the number of users, the modules implemented, the cost of implementation, the number of sites where ES were implemented, the type of industry, and the size of the organizations in terms of number of employees and revenue. Contact was first established with the respondents through email and by phone. An introductory letter briefly explaining the study and seeking an appointment for an interview was then sent to the respondents. On receipt of confirmation, the research information sheet, along with the questions, was sent prior to the interview, and the answers were discussed during the interview. The respondents discussed ES implementations from their own perspective and experience in terms of their ES applications, their clients, and their implementation methodologies. Ten face-to-face meetings took place at the respondents' organizations, with one interview from each firm. The interviews lasted between 60 and 90 minutes each, were tape recorded, and were transcribed immediately after each interview. The empirical findings were analyzed using the NVivo 7.0 qualitative software tool and the inferences reported.
Typical ES Implementations in NZ

Findings in this study reveal that the ES market is based on three size segments: the large enterprise, the medium-sized enterprise, and the small firm. Most respondents used the number of employees as the measure of organizational size; however, some respondents used revenue. Until recently, the focus of implementations was on the large enterprise: businesses and government agencies with more than 500 employees and revenue greater than $250M. Now, however, the focus for new implementations has shifted to the SME sector. The higher end of medium-sized organizations in NZ employ between 100 and 299 staff and have revenue between $50M and $200M; at the lower end of this segment, employees number between 20 and 100 and revenue is between $10M and $50M. In the small organization segment in NZ, there are fewer than 20 employees and revenue is less than $10M. Findings revealed that large enterprises in NZ could have 200 or more users in a typical implementation, while SME-based implementations could have between 20 and 200 users. A classification by the research firm IDC, provided as part of the current study, shows the sizes of companies where ES is implemented, in terms of number of users, as a percentage of companies in NZ: small organizations with fewer than 20 users make up 26% in the NZ context, medium-sized organizations with 20-200 users make up 49%, and organizations with more than 200 users are large, at 25% of the NZ market, as shown in Table 2.

Table 2. Number of users in NZ companies where ES is implemented

Size of Organization | Number of Users | Percent in NZ
Large | >200 | 25%
Medium | 20-200 | 49%
Small | <20 | 26%

In another study by …

Respondent | Implementation Time | Implementation Cost
… | … | … $10 M; Large – $2 M to 10 M; SME – $0.5 M to 2 M
Microsoft | Large – 18 to 24; Medium – 9 to 12; Small – 3 to 6 | Very large – multi-million $; Mid-market – $0.5 M to 2 M; Small – $0.1 M to 0.5 M
Oracle | Large – 18 to 24; SME – 6 to 12 | Not answered in figures
EMDA Consulting | Large – 10 to 12; SME – 3 to 6 | Large > $1 M; SME – $0.2 M to 1 M
PricewaterhouseCoopers | Large – 24 to 48; SME – 4 to 12 | Very large – $10 M to 50 M; Large – $2 M to 10 M; SME – $0.5 M to 2 M
IDC | Large – 9 to 18; SME – 3 to 9 | Not answered in figures
… its own implementation. However, companies in the SME sector are now optimizing by using one implementation across multiple locations, because they find it too hard to manage and maintain separate implementations at each location. "Organizations are realizing it's no use having IT administrators in all the locations doing a similar task." The growth in the export markets of NZ companies, coupled with the availability of Internet-capable technology, is also driving multi-site ES implementations in NZ. These implementations are now single instance, which, as SAP explained, means that only one installation of the software is made to run on one server but the software is used at multiple locations. This environment differs from the earlier multi-instance one, where multiple installations of the software were made to run across the company in one or more locations. Typical implementations today are single-instance, multi-site implementations: organizations implement the ES at one site, which is their main manufacturing or business centre, and this single instance is used by all other subsidiary sites, distribution warehouses, and sales offices. An implementation partner is mostly used for managing the ES project. Findings revealed that while a third party or consultant implementer was popular in the past for large-organization implementations, SME customers now prefer the software vendor's direct involvement. This finding again confirms the Shakir (2002) study, which also noted that vendor-driven implementations were on the increase. A majority of the participants in the current study suggested that there has been a shift over the last five years: customers who traditionally preferred to work with the Big 5 consulting companies for implementation are now more inclined to work directly with the software vendor so that they have a one-stop shop. Customers are starting to realize that the technical skills a software vendor provides may not be available from consultants. One vendor explained that customers feel that unless they actually talk to the software owners, they may not get the best value from a price perspective, or the best experts involved in the project. The post-implementation and after-sales support provided by the software vendor or the implementer to the customer organization normally includes three levels of support (see Figure 1). The first level is at the customer's end, where the customer's super user (i.e., the ES champion) determines whether it is a "how to" question (the user does not know how to use the system) or something else; if the problem is a user or organizational issue, it is resolved at this first level. If it is not an end-user problem but a general business requirement issue, it is referred to the second level of support, which is the local vendor implementer or implementation partner. The second level determines whether it is a functionality or software performance setting issue that requires additional configuration to make the system meet the business requirement. Finally, if the local implementation partner determines that the problem is a software bug or a product-related issue, it is raised to level three, the support channel inside the software vendor. So it is typically a three-tier support model, as shown in Figure 1.
Figure 1. Three tier post implementation support model
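As a rough sketch of the routing logic in Figure 1, the following Python fragment maps issue categories to support levels; the category strings and function name are illustrative, since the chapter describes the flow rather than an implementation.

```python
# A minimal sketch of the three-tier triage flow in Figure 1. The issue
# categories and names are illustrative, not taken from the chapter.
def route_support_issue(issue: str) -> str:
    """Return which support level handles an issue, per the three-tier model."""
    if issue in ("how-to question", "user error", "organizational issue"):
        return "Level 1: customer's super user (ES champion)"
    if issue in ("business requirement", "configuration", "performance setting"):
        return "Level 2: local vendor implementer / implementation partner"
    # Software bugs and product-related issues escalate to the vendor.
    return "Level 3: software vendor's internal support channel"


for issue in ("how-to question", "configuration", "software bug"):
    print(f"{issue} -> {route_support_issue(issue)}")
```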
Customization is the process in which changes are made to the ES software during the implementation phase to suit the needs of the organization in which it is being implemented. This happens when the best business practices embedded in the ES software do not satisfy the needs of the business, and the software is changed to meet the requirements of the organization (Davenport & Prusak, 1998; Kumar & Van Hillegersberg, 2000). There are two implementation strategies, or models. The first is "comprehensive customization", in which many, and sometimes major, changes are made to the software during implementation to satisfy business requirements. The second is "vanilla", or "out-of-the-box", in which the ES software is implemented without any changes and the business processes within the organization are changed to suit the functionality of the software. SAP explained that there is a potential source of confusion about the extent of customization, as every project needs some customer-specific reports, customer-specific interfaces, and customer-specific data conversion programs. ES software is designed to meet most customization requirements by adjusting parameter settings; all modern software vendors now have a software architecture that does not require modification of the core software statements to achieve results. User access can be configured through parameter settings to accommodate specific requirements. However, this should not be confused with true customization, in which the core software is actually modified. Findings in this study revealed that organizations now view the ES software not as a bunch of statements but as pre-defined business processes. These organizations prefer to adhere to the pre-defined business processes in the software and change their own processes to match. Companies doing this are more likely to succeed in capturing the benefits and controlling the cost of the implementation, as this also helps with future upgrades and reduces the overall cost of ownership. These findings also confirm the Brehm, Heinzl, and Markus (2001) study, which estimated that the greater the customization, the more the implementation will encounter difficulties and suffer on cost, schedule, and performance metrics, and the more difficulty the company will experience when attempting to upgrade to a later package release; on the other hand, organizational adaptation to the ES will be easier and the system will better meet the needs of the business. The current study revealed that vanilla implementations are more common in the SME category, whereas large organizations are more likely to have comprehensive implementations. Shakir and Viehland (2004) noted cost as a driver towards vanilla implementations, and the two approaches have major implications for the change management strategy: when best practice is chosen, people issues become the top priority, whereas when the implementation strategy is geared towards customization, it is more of a technical challenge. Parr and Shanks (2000) reported, in their study of different ES implementation approaches, that vanilla implementations are usually single-site and comprehensive implementations multi-site. However, the current study suggests that vanilla implementations can be single- or multi-site, and currently more implementations are multi-site. An implementation is considered new when the ES is implemented in an organization for the first time. An upgrade is when a revised version of the software with some additional functionality is implemented to replace the existing software in the current implementation (Dalal, Kamath, Kolarik, & Sivaraman, 2004).
Add-ons, also called bolt-ons, involve adding new modules to an existing implementation. A replacement means changing the existing implementation to a different vendor's software. The Shakir (2002) study observed that while new implementations were happening in SMEs, large organizations were focusing only on upgrades and add-ons, which comprised 10-15% of total implementations. However, findings in this study suggest an equal split between new implementations and upgrades, add-ons, and replacements in NZ organizations. In the current study, SAP suggested a 50-50 split: "We're definitely focusing on new implementations because that's where our goal is. However, we have to look after our existing customer base and as their requirements change, the presentation of our software in their business may also need change." In the case of replacements, Oracle noted that an organization will replace an ES only if there is a need to satisfy some major benefit that remains unsatisfied by the existing system, because replacement is expensive: it is not just the cost of the software, but the huge organizational change the organization has to go through to replace an enterprise system. Oracle also revealed that in the past this cost was underestimated, but "replacement cost is three times the cost of upgrade". Oracle further highlighted the maintenance aspect, which includes the cost of upgrading the ES: "Typically in every five-year period, companies spend up to four times the initial purchasing implementation cost, just to maintain the ES. That is why IT budgets in organizations allocate substantially for upgrade support as opposed to new requirements." Another model used during ES implementation is best-of-breed, as opposed to a single-vendor implementation. The best-of-breed model involves implementing a mix of modules from different vendors, each in the area the vendor specializes in, to have the best of everything (James & Wolf, 2000; Pender, 2000); a single-vendor implementation includes all the modules from one vendor as an integrated package. Findings in this study revealed that the best-of-breed model was typically adopted by many organizations in the past. For example, an organization might install an HRM module from PeopleSoft, financials from Oracle, and manufacturing from SAP in the first phase, and subsequently install bolt-on modules such as CRM from Microsoft, SCM from SAP or Oracle, or BI from Cognos in the second phase. However, both customers and vendors are now moving towards single-vendor implementation, because the additional benefit received from a best-of-breed implementation is vastly outweighed by the cost of implementing, maintaining, and managing the disparate systems. Organizations have realized that although there are very good benefits in PeopleSoft, SAP, and Oracle for their different modules, the cost to implement and maintain them all is enormous. While they may get only 85% to 90% of the best-of-breed benefit in a single-vendor implementation, that is preferable, especially since it can save three times the cost of implementation and maintenance. Vendors also do not release new versions of their software at exactly the same time; therefore managing the upgrade path becomes difficult, the investment depreciates faster than expected, and organizations are unable to take advantage of the new features of the software. This aspect differed from the findings of the Shakir (2002) study, which noted that while traditional ERP implementations still dominated new implementations, upgrades, add-ons, and replacements appeared to favor the best-of-breed model. The best-of-breed model remains a consideration for new implementations, especially for organizations that operate in niche industries. The application service provider (ASP) implementation model is one in which a service provider offers ES application software as a service, or hosting, to organizations at a fixed cost for a specific period (Malcolm, 2002; Pamatatau, 2002). There was a mixed response from respondents on this model.
One vendor noted, "ASP pops up every 5 years and was a bit like an economy that came and went and nothing really happened. I'm not too sure it is addressing a real market requirement need". Another vendor responded, "I don't think this model has picked up at all". However, yet another vendor confirmed that the ASP model is used in NZ: "We use it. Customers are happy with it. We have time and resources for providing the service. There's a huge market there. These are small companies that do want an ES and they don't mind paying sixty to seventy thousand dollars a year but are not able to spend half a million to one million dollars for buying the software. It is not too difficult for these companies to put up a few servers each with the latest operating system of windows. We've got the people and it's not much of their time, so we can provide this service. There is no trend, but there is a huge market out there if marketed properly." Except for this one vendor, the overall response was not very positive about the ASP implementation model, either in the current ES environment or in the recent past (Shakir, 2002). Another model, referred to as business process outsourcing (BPO) or the managed service model, was cited by respondents as a growing implementation model in the NZ context. In this model, outsourcers run a customized managed service of an ES implementation for customers, where effectively a single solution is sold to a customer. One consultant explained that this is a low-cost commodity solution for customers who prefer not to manage the ES themselves. This model positions itself very much in the SME market, for example in outsourcing transaction services or specific functions such as finance or payroll to a third-party supplier: "An organization may have implemented Oracle financials or SAP finance, for example, but may be paying, say, IBM to run the technology and specific functions for them". Despite its risks, ES implementation is pervasive in many different types of industries (Kumar & Van Hillegersberg, 2000; Mabert, Soni, & Venkataraman, 2000). A majority of the respondents noted that ES implementations cover most industry sectors in NZ; however, some respondents provided specific examples highlighting trends.
SAP explained that traditionally, over the last 10 years, there have been many implementations in the consumer packaged goods, manufacturing, forestry, and pulp and paper industries. However, in the last two years there has been a slight shift in the ES market in NZ, with several implementations in the retail and utilities industries, and this trend is likely to continue for the next two years.

ES maturity in an organization depends upon the number of years' experience the organization has had with ES and the stage of ES implementation (Hawking, Stein, & Foster, 2004). This concept of ES maturity and the different stages of ES implementation is reinforced by the Nolan and Norton Institute (2000) classification, which groups implementations into levels of maturity: beginning, where the ES has been implemented within the past 12 months; consolidating, where the ES has been in place between one and three years; and mature, where the ES has been in place for more than three years. Findings revealed that most NZ organizations are reasonably mature with ES technology and IT in general. Most large organizations and many SMEs in NZ have been using some form of ES technology for more than a decade and are at a fairly advanced level of maturity. This also confirms the Shakir (2002) study, which noted that although NZ is a small country, its technology is mature and on a par with what is happening in the U.S.
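The Nolan and Norton Institute levels amount to a simple classification rule over implementation age. The sketch below merely restates the cited scheme; it is an illustration, not part of the original classification:

```python
def es_maturity(years_since_implementation: float) -> str:
    """Nolan and Norton Institute (2000) maturity levels by ES age."""
    if years_since_implementation < 1:
        return "beginning"      # implemented within the past 12 months
    if years_since_implementation <= 3:
        return "consolidating"  # in place between 1 and 3 years
    return "mature"             # in place for more than 3 years

# Most large NZ organizations, with more than a decade of ES use,
# fall squarely into the "mature" level:
print(es_maturity(10))  # mature
```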
However, according to the respondents, there are a couple of issues in managing ES projects that highlight the slower pace of ES maturity within NZ industry. First, many NZ organizations do not conduct a proper business justification of their implementation. Although some improvement has been seen in the last couple of years, most NZ organizations produce little or no value assessment, which often leads to weak business cases and insufficient benefit models that cannot be used for benefit tracking. Plant and Willcocks (2006), in their study on critical success factors for ES implementations, likewise found an increased emphasis upon the determination of clear goals and objectives at the project outset as one of the important factors for ES implementation success. Second, many organizations in NZ believe the implementation of an ES is a technology challenge. According to most respondents, however, it is more about change management, people, and processes, and less about technology. Together with business case development, these are the areas in which many NZ companies are struggling.

Respondents also revealed that typically, when a new system is implemented, productivity drops for a period and then rises again. Oracle suggested that the depth of the drop depends upon how well the system is implemented, how well the change is managed, how well the business case is defined, and how well managers measure and manage benefits before the organization starts seeing the benefits flow through. Until a few years ago, the majority of organizations did not use the ES to its true capacity. The ES was used as a financial system, as a central repository for HRM records, or as a method for raising purchase orders. This was because organizations had not thought about what they were trying to optimize, what benefits they were trying to bring into the organization, what they were trying to change, how they were trying to manage the business, and whether they could actually get the information to manage the business. Recently, however, the software vendors have started to see several companies trying to find ways to get more value out of their investment. Companies have started asking how to establish analytical processes to optimize and realize business value from their ES investment. Many NZ organizations have already completed their first phase of ES needs and are now extending into the second phase with CRM, SCM, or BI. Most respondents agreed that the slower pace of ES maturity within NZ organizations is due to limited spending power, attributable to the comparatively small NZ economy. However, this trend is now changing: NZ organizations have started realizing the value of technology and its use to stay ahead of the competition.
Findings revealed that the mix between national and international ES implementations is a 50-50 split in NZ. Respondents noted that on several occasions implementations started as national implementations but quite quickly reached out to markets such as Fiji, Australia, Singapore, or Europe—wherever the sales and distribution offices are located. Although many of these companies are based in NZ and their reach is national, there is a growing trend among NZ organizations to expand into global markets, so their reach is becoming global. This is also supported by the growing export orientation of NZ organizations. Global implementations are also part of multinational organizations that implement an ES within their NZ companies. These implementations are normally “roll-outs” based on a global template that includes standard business processes. The “roll-out”, as explained by one consultant, is an implementation generated from a template and customized for an overseas location. The rollout starts with a massive data set prepared by the first implementation, followed by the addition of country-specific and localized data. For example, GST or VAT percentages differ, the states used in addresses differ, and therefore a couple of country-specific master files are implemented on top of the local customer and vendor base that is created. The data-set roll-out is established using country-specific data, where the new country settings override the template settings. A separate dedicated warehouse for these locations is also included for tracking transactions. However, many NZ companies are governed by their parent organizations; hence all decision making for the ES implementation is done offshore by the parent company, without much control by the NZ businesses. This is nothing new: implementations based on global templates, with critical decision making done offshore, were observed in Shakir's (2002) study.
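The “roll-out” pattern described by the consultant—a global template whose settings are overridden by country-specific master data—behaves like a layered configuration merge. The following sketch is only an illustration of that idea; all field names and values are invented (the 12.5% rate reflects NZ GST at the time of the study):

```python
# Illustrative layered-configuration view of a template "roll-out".
# All field names and values are invented for the example.

GLOBAL_TEMPLATE = {
    "chart_of_accounts": "GROUP-STD",
    "sales_tax_rate": 0.0,             # overridden per country (GST/VAT)
    "address_format": "street/city/postcode",
    "warehouse": None,                 # dedicated warehouse added per location
}

NZ_OVERRIDES = {
    "sales_tax_rate": 0.125,           # NZ GST at the time of the study
    "address_format": "street/suburb/city/postcode",
    "warehouse": "AKL-01",
}

def roll_out(template: dict, country_overrides: dict) -> dict:
    """New country settings override the template settings."""
    config = dict(template)            # start from the global template
    config.update(country_overrides)   # layer on country-specific data
    return config

nz_site = roll_out(GLOBAL_TEMPLATE, NZ_OVERRIDES)
print(nz_site["sales_tax_rate"])       # 0.125
```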
In summary, ES implementations in NZ organizations are moving from a national towards a global reach, either through expansion into overseas markets or through offshore ownership.
CONCLUSION AND FURTHER RESEARCH

The main objective of this study was to understand typical ES implementations and practices in NZ and how current implementations in the SME sector differ from earlier implementations in the large enterprise sector. The findings are analyzed and summarized in Table 4, based on the different organizational size segments of the ES market. Table 4 lists the ES implementation determinants for both large and SME organizations and shows the relationships between organization size and the implementation process variables. It is evident from the table that typical cases of ES implementation in NZ exist in both the large and the SME organization segments.
Typical implementations in the large organization segment, with revenues of more than $250M, are currently in phase 2, and the organizations are fairly mature with their ES. These organizations are likely to be in the phase of acquiring collaborative scenarios like SCM or CRM, or management services applications such as BI. These can be single- or multi-site implementations. The number of users is estimated at 200 or above, and the cost of the project is likely to be more than $2M. In the SME segment, typical implementations are in organizations with revenue between $50M and $200M. These implementations are likely to be new, with two or more core ES modules. They can be single- or multi-site implementations, with the number of users in the range of 20 to 200 and the cost of the implementation between $100,000 and $1M.
Table 4. Determinants for describing typical ES implementation based on organizational size segment

Organizational characteristics              | Large         | SME
Organization size: revenue in million ($NZ) | 250M and over | Small: 10-50M; SME: 50-250M

ES implementation process variables
Phases of ES implementation | Phase 2 | Phase 1
Modules                     | Supplementary modules (HR, SCM, CRM, data warehousing, and BI) | Core modules (finance, manufacturing, distribution)
Time for implementation     | 12 to 24 months | 3 to 12 months
Locations                   | Single or multi-site | Multi-site
Cost of implementation      | Above $NZ 2M | Small: $NZ 100,000 – $NZ 1M; SME: $NZ 1M – $NZ 2M
Number of users             | Above 200 | Small: below 20; SME: 20 – 200

ES implementation models
Implementation partners (vendor vs. third party)  | Third party | Vendor
Customization (vanilla vs. comprehensive)         | Comprehensive | Vanilla
Implementation (new vs. upgrades/add-ons/replace) | Upgrades/add-ons/replace | New
Implementation (single vendor vs. best-of-breed)  | Best-of-breed | Single vendor
Many ES implementations in New Zealand are now several years old; however, these companies have only recently started asking how to actually optimize processes and realize business value from their ES investments. Organizations are establishing analytical processes for tracking benefits and are continuously improving in taking advantage of the technology.

The findings of this study are limited to the views of professionals from different ES vendors, ES consultants, and IT research organizations, and by a small sample size. There may also have been some influence on the responses from the commercial interests of the firms the participants worked for. Nevertheless, the study has achieved its objectives, thanks to the seniority and experience of the respondents within the organizations interviewed and the position of these organizations as key players in the ES industry in New Zealand. Further research is in progress to analyze the current practices and the critical effectiveness constructs of ES in New Zealand from the practitioners' perspectives identified by this study.
REFERENCES

Allen, D., Kern, T., & Havenhand, M. (2002). ERP Critical Success Factors: An Exploration of the Contextual Factors in Public Sector Institutions. Paper presented at the 35th Annual Hawaii International Conference on System Sciences, Hawaii.

Bancroft, N. H., Seip, H., & Sprengel, A. (1998). Implementing SAP R/3 (2nd ed.). Greenwich, CT: Manning Publications.

Brehm, L., Heinzl, A., & Markus, M. L. (2001). Tailoring ERP Systems: A Spectrum of Choices and Their Implications. Paper presented at the 34th Hawaii International Conference on System Sciences, Hawaii.

Dalal, N. P., Kamath, M., Kolarik, W. J., & Sivaraman, E. (2004). Toward an Integrated Approach for Modeling Enterprise Processes. Communications of the ACM, 47(3), 83–87. doi:10.1145/971617.971620

Davenport, T. H. (2000). Transforming the Practice of Management with Enterprise Systems. In Mission Critical (pp. 203–235). Boston, MA: Harvard Business School Press.

Davenport, T. H., & Harris, J. G. (2005). Automated Decision Making Comes of Age. MIT Sloan Management Review, 46(4), 83–89.

Davenport, T. H., Harris, J. G., & Cantrell, S. (2002). The Return of Enterprise Systems: The Director's Cut. Accenture Institute for Strategic Change.

Davenport, T. H., & Prusak, L. (1998). Working Knowledge. Boston, MA: Harvard Business School Press.

Hawking, P., Stein, A., & Foster, S. (2004). Revisiting ERP Systems: Benefit Realisation. Paper presented at the 37th Hawaii International Conference on System Sciences, Hawaii.

Hedman, J., & Borell, A. (2002). The Impact of Enterprise Resource Planning Systems on Organizational Effectiveness: An Artifact Evaluation. In F. F.-H. Nah (Ed.), Enterprise Resource Planning Solutions & Management (pp. 125–142). Hershey, PA: IRM Press.

Holland, C., & Light, B. (1999). A Critical Success Factors Model for ERP Implementation. IEEE Software, 16(3), 30–36. doi:10.1109/52.765784

James, D., & Wolf, M. L. (2000). A Second Wind for ERP. McKinsey Quarterly, (2), 100–107.

Kumar, K., & Van Hillegersberg, J. (2000). ERP Experiences and Evolution. Communications of the ACM, 43(4), 23–26. doi:10.1145/332051.332063

Mabert, V. A., Soni, A., & Venkataramanan, M. A. (2000). Enterprise Resource Planning Survey of US Manufacturing Firms. Production and Inventory Management Journal, 41(2), 52–58.

Malcolm, A. (2002, July 11). Fonterra Rents its Accounting Application. Computerworld, IDG Communications. Retrieved from http://www.idg.net.nz/webhome.nsf/UNID/8433B6BCB6BE15FECC256BF1007BF560

Markus, M. L., & Tanis, C. (2000). The Enterprise Systems Experience—From Adoption to Success. In R. W. Zmud (Ed.), Framing the Domains of IT Research: Glimpsing the Future Through the Past (pp. 173–207). Cincinnati, OH: Pinnaflex Educational Resources.

Nolan and Norton Institute. (2000). SAP Benchmarking Report 2000. Melbourne.

Pamatatau, R. (2002, June 24). The Warehouse Outsources Oracle Management. NZ Infotech Weekly, p. 3.

Parr, A., & Shanks, G. (2000). A Model of ERP Project Management. Journal of Information Technology, 15(4). doi:10.1080/02683960010009051

Pender, L. (2000, September 15). Damned If You Do: Will Integration Tools Patch the Holes Left By An Unsatisfactory ERP Implementation? CIO Magazine. Retrieved from http://www.cio.com/archive/091500_erp.html

Plant, R., & Willcocks, L. (2006). Critical Success Factors in International ERP Implementations: A Case Research Approach (Working Paper Series 145). London: Department of Information Systems, London School of Economics and Political Science.

Sarker, S., & Lee, A. S. (2000, November 13). Using a Case Study to Test the Role of Three Key Social Enablers in ERP Implementation. Paper presented at ICIS 2000. http://www.commerce.uq.edu.au/icis/ICIS2000.html

Scott, J. E., & Vessey, I. (2002). Managing Risks in Enterprise Systems Implementations. Communications of the ACM, 45(4).

Shakir, M. (2002). Current Issues of ERP Implementations in New Zealand. Research Letters in Information and Mathematical Science, 4(1), 151–172. Massey University, Auckland, New Zealand.

Shakir, M., & Viehland, D. (2004). Business Drivers in Contemporary Enterprise System Implementations. In Proceedings of the Tenth Americas Conference on Information Systems, New York (pp. 103–112).

Shanks, G., Seddon, P. B., & Willcocks, L. P. (2003). Second-Wave Enterprise Resource Planning Systems: Implementing for Effectiveness. Cambridge University Press.

Skok, W., & Legge, M. (2001). Evaluating Enterprise Resource Planning (ERP) Systems Using an Interpretive Approach. In Proceedings of the 2001 ACM SIGCPR Conference on Computer Personnel Research (pp. 189–197).

Sumner, M. (1999). Critical Success Factors in Enterprise-wide Information Management Systems Projects. Paper presented at the 5th Americas Conference on Information Systems, Milwaukee, WI.

Viehland, D., & Shakir, M. (2005). Making Sense of Enterprise Systems Implementation. University of Auckland Business Review, 7(2), 28–36.

Yang, S., & Seddon, P. B. (2004). Benefits and Key Success Factors from Enterprise Systems Implementations: Lessons from Sapphire 2003. Paper presented at the 15th Australasian Conference on Information Systems, Hobart, Australia.

Zrimsek, B. (2002). ERP II: The Boxed Set. Retrieved March 4, 2002, from www3.gartner.com/pages/story.php.id.2376.s.8.jsp
KEY TERMS

Application Service Provider (ASP): A business that provides specialized software as computer-based services to customers over a network.

Business Intelligence (BI): Software tools that use the ERP database to generate customizable reports and provide meaningful information and analysis to employees, customers, suppliers, and partners for more effective decision making in the organization.

Business Process Outsourcing (BPO): Contracting of specific business tasks, such as payroll, marketing, or billing, to a third-party service provider as a cost-saving measure.

Enterprise Resource Planning (ERP): Software systems for business management that integrate functional areas such as planning, manufacturing, sales, marketing, distribution, accounting, finance, human resource management, project management, inventory management, service and maintenance, transportation, and e-business.

Customer Relationship Management (CRM): Software systems that help companies acquire knowledge about customers and deploy strategic information systems to optimize revenue, profitability, and customer satisfaction.

Critical Success Factors (CSF): The factors that are critical for an organization or a project and that must go right for the defined mission to be achieved.

Customization: Altering a system's software code to include functionality specifically wanted by an organization, although not originally included in the package itself.

Extended ERP: Extends the foundation ERP system's functionalities, such as finance, distribution, manufacturing, human resources, and payroll, to customer relationship management, supply chain management, sales-force automation, and Internet-enabled integrated e-commerce and e-business.

Small and Medium-Sized Enterprise (SME): A business enterprise that is independently owned, with the owners or managers contributing most of the operating capital and managing the business, having fewer than 250 employees and a small-to-medium market share. The threshold differs across regions and countries (in some countries it is fewer than 500 employees, in others fewer than 100) and may also vary with the type of business.

Supply Chain Management (SCM): Software systems for the procurement of materials, the transformation of materials into products, and the distribution of products to customers, allowing the enterprise to anticipate demand and deliver the right product to the right place at the right time at the lowest possible cost.
This work was previously published in Handbook of Research on Enterprise Systems, edited by Jatinder N. D. Gupta, Sushil Sharma and Mohammad A. Rashid, pp. 170-184, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.5
A Conceptual Framework for Developing and Evaluating ERP Implementation Strategies in Multinational Organizations

Kai Kelzenberg, RWTH Aachen University, Germany
Thomas Wagner, RWTH Aachen University, Germany
Kai Reimers, RWTH Aachen University, Germany
ABSTRACT

The chapter develops generic strategies for the specification and implementation of an enterprise resource planning (ERP) system in a multinational company. After the presentation of a framework for categorizing companies by their global business orientation, ERP strategies corresponding to each category are derived. Subsequently, various implementation strategies are developed for each type of ERP strategy; they provide decision makers with a high degree of freedom in specifying an implementation strategy in accordance with a company's strategic goals. The results are summarized in a phase model; the overall approach is illustrated by two polar cases.

INTRODUCTION
Market demands are becoming more and more dynamic, forcing organizations to be flexible in order to satisfy the needs of their customers (Mabert, Soni, & Venkataramanan, 2001). At the same time, organizations face ever-increasing competition through globalization. As a result of both phenomena, business organizations tend to act in networks of tightly or loosely coupled productive units. Bartlett and Ghoshal (1998) identify four different business orientations which can be used to describe the structures of multinational companies (MNCs). Starting from these four business orientations, this chapter presents a conceptual framework for deriving enterprise resource planning (ERP) implementation strategies in multinational organizations.
DOI: 10.4018/978-1-59904-531-3.ch003
This is motivated by the findings of prior research, which show the importance of aligning the IT strategy with a firm's business strategy (Ward, Griffiths, & Whitmore, 1990; Earl, 1993). However, organizational contingencies are seldom considered in the literature on ERP implementations, which focuses on critical success factors (CSFs) (Holland & Light, 1999; Akkermans & van Helden, 2003) or on critical issues and risk factors in general (Bingi, Sharma, & Godla, 1999; Sumner, 2000; Scott, 1999; Hong & Kim, 2002; Gosh & Gosh, 2003). The framework of Bartlett and Ghoshal has previously been applied in the IS field. Reimers (1997) shows how IT can be managed in the transnational organization, and Madapusi and D'Souza (2005) have used the framework to develop recommendations regarding the way ERP systems should be configured in multinational companies. While these authors also discuss the issue of appropriate implementation strategies, that discussion focuses on the question of a 'big bang' vs. a phased implementation approach, which we deem too narrow. Rather, we propose that the configuration of an ERP system should follow an appropriate ERP implementation strategy, which comprises many more issues than that of a big bang vs. a phased approach. In this chapter, we offer a framework which helps to conceptually organize the issues that should be considered in deriving an ERP implementation strategy for multinational companies, and which also helps to fine-tune the implementation strategy as the implementation process unfolds.

The remainder of this chapter is organized as follows: First we give a short review of different views on ERP before we derive ERP strategies from an organization's business orientation. Subsequently we discuss different ERP implementation strategies. Afterwards, our framework is presented and illustrated by two examples. The chapter ends with a discussion.
BACKGROUND

The implementation of an enterprise resource planning system in a company can have different degrees of complexity, conditioned by the following items (the list is limited by the scope of the chapter; several more perspectives could be added in future work):

1. ERP definition
2. ERP strategy
3. Implementation strategy
Referring to O'Leary (2000), ERP systems “provide firms with transaction processing models that are integrated with other activities of the firm” (p. 7). Moreover, they can reduce information asymmetries and help to create one view of all relevant data which can be shared across the whole organization. This concept is based on a single database that contains all data of several functional and/or local areas. Bancroft, Sprengel, and Seip (1996) offer a similar definition of ERP systems, focusing on SAP R/3. For them, an ERP system consists of “one database for the entire corporation without any data redundancy and with a clear definition of each [data] field” (p. 17). Firestone (2002) adds another perspective on ERP, noting that customers want ERP for decision-making support, although other (software) systems are more specialized in this area. Markus and Tanis (2000) add the opinion of some ERP vendors who state that their software “met all the information-processing needs of the companies that adopted them” (p. 174). This includes an automatic data transfer facility between several functions within the system as well as a shared database for all applications.

These conditions could be satisfied fairly easily if one were dealing with a single-site company, but the more interesting question is what happens if a company is composed of different sites with different ranges of functionality—for example, a large producer with several distribution centers around the world, or a federation of several producing and distributing companies that form one major company.
Is it possible, referring to the above-mentioned authors, to call the system installed across all members of such a unit one big ERP system? Following such a scenario, what would have to be done to standardize the data of each business unit in such an organization, and which benefits would result from this effort? How far should standardization go, and which areas could usefully be standardized? Moreover, who should be in charge of choosing and designing an ERP system that fits the company's requirements and of keeping the system up and running? Which organizational units have to be established to lead this venture to success? Many of these questions cannot be answered in a general way, but they help to identify the different aspects of the design process of such a project. The framework presented here aims to support decision makers with a structured approach to this task.

Once the ERP system has been designed in accordance with the company's requirements, an IT strategy focusing on implementation has to be developed. Karimi and Konsynski (1991) state that an IT strategy has to be derived from the business strategy. Similarly, an ERP strategy has to be developed which shows the long-term goals to be reached by the use of an ERP system. Clemmons and Simon (2001) state that a misalignment between business and ERP strategy often causes ERP implementation delays and failures. (As will be shown later, there is a need for a centralized authority which is in charge of deciding whether a global ERP strategy is to be developed, and, if so, how to enforce it within the whole company.)

In the subsequent sections, the work of Bartlett and Ghoshal (1998) helps to identify four different types of business orientations for companies. These are generic views which, in reality, will be mixed in a number of ways, but for developing a general ERP strategy they provide important insights regarding the questions of when one can speak of a global ERP strategy and how important it is to have one organizational entity for the whole company that can develop and promote the ERP strategy for all sites.
In this chapter, we define an ERP strategy as specifying the range of system components that have to be installed at each site, as well as the interfaces and data formats through which data transfer should be done. The chapter goes on to show different generic ways to realize an ERP system according to the global ERP strategy (the ERP implementation strategy). In this context, the problems of tailoring an ERP system to fit the company's requirements, as described by Brehm, Heinzl, and Markus (2001), are outlined. Since not all strategies require different implementation strategies, some can be discussed jointly. This chapter focuses mainly on the first phase(s) of an ERP life-cycle (cf. Esteves & Pastor, 1999, or Markus, Axline, & Tanis, 2000). Given its academic and practical attention, the implementation phase seems to be a crucial one, which motivates our focus. The consideration of all phases is beyond the scope of this chapter; for a comprehensive literature review covering all phases, we refer to Esteves and Pastor (2001).
BUSINESS ORIENTATION

In 1998, Bartlett and Ghoshal published a book in which they describe a number of companies and how their ways of doing business differ. Based on an analysis of these differences, the authors explain why one firm is more successful with its strategy than another. They develop a framework consisting of four types of multinational companies. Each type is characterized by a distinct strategy and business orientation that deals with the allocation and interconnection of the company's resources. The term “resource” is not described in detail but refers to skills, personnel, money, knowledge, and so on.
Applying these insights to the topic of this chapter, ERP systems should optimize the use of such resources. The different business orientations are explained in the subsequent sections, where they provide the basis for deriving the corresponding global ERP strategies. The designer of such a system must take into account that there might be not only positive influences on the company's learning capabilities; side effects could also cause changes in organizational values and norms (Butler & Pyke, 2003). Thus, ERP implementation goes beyond an IT project: it needs the support of top management to enable the organizational rebuilding processes.
Global Orientation

This business orientation is built around a strong center that makes all important decisions for the whole company. The decisions made are communicated to the individual sites and have to be implemented without any adjustment to local requirements. Through this process the whole company looks alike, from the customers' point of view, all over the globe. One implication of this orientation is that the whole world is seen as one market which is supplied with identical products. To ensure the loyalty of each site, the sites' general managers are handpicked by headquarters (HQ) to control all activities and to implement the envisioned processes. New processes have to be developed just once and can accordingly be implemented worldwide in a short time. The global orientation is a hub-and-spoke kind of network, with strong ties of the spokes to the hub, which is represented by HQ.

In this setting, HQ makes the decision to implement an ERP system and decides what kind of software will be used. Since the idea is that all processes should be equal at all sites, the system has to be developed just once. It can thus easily meet all the above-mentioned requirements of an ERP system, such as one shared database, one system, and the interconnection of all functional areas.
In this context it does not matter how many functions one site implements: one site could implement just a module for sales and distribution, while another needs production and purchasing too. The integration of the functions does not differ in the two cases. On the other hand, HQ has to consider all local requirements of the individual sites, such as local laws, local accounting rules, and possibly environmental stipulations (Krumbholz, Galliers, Coulianos, & Maiden, 2000). As a consequence, the 'how' seems to be less complex than the 'what'. Arguably, the planning process for such a system takes longer than in other company constellations, while the implementation process can be shortened. From a strategic point of view, the MNC with a global orientation can easily add new plants to its network because it has a predefined and tested system that just has to be implemented. Besides, the MNC can realize learning-curve effects from previous implementations, which should help ensure the success of an implementation process at a single site. Thus, the ERP strategy implies that all sites, including HQ, get one system. In accordance with this ERP strategy, a single implementation strategy can be developed, including estimated project duration, costs, and scope for the individual sites (the term scope is used to mean the range of modules that should be implemented at one site). HQ determines the design of the system and the rollout sequence for all sites, and defines the pilot site where the system is introduced and tested for the first time.
International Orientation

In this orientation, HQ partly delegates control to the individual sites. Local adjustments are possible, but the main decisions regarding product policies and strategies are made by HQ. Unlike in the global orientation, the sites have the opportunity to reject HQ's decisions, which strengthens their position.
As a consequence, core competencies remain at HQ, while individual sites can develop important business competencies too. The ERP strategy for this business orientation is similar to that of the global orientation. The major activities are initiated and coordinated by HQ. The whole company gets one system that—in contrast to the global orientation—is open to adjustments to local requirements. For this purpose, the guidelines of the whole implementation have to be softened. The core system is developed by HQ but can be reconfigured by local sites. The system has one shared database which helps to collect data centrally and supports decision-making processes at the highest management level. The rollout sequence must be harmonized with regard to the interests of the individual sites, which also includes the determination of a pilot site. Due to the possibility of local adjustments, the complexity of the implementation at a single site, as well as of the overall process, will increase. Therefore, neither the 'what' nor the 'how' seems to be that simple. If we follow the idea of Huber, Alt, and Österle (2000), the use of templates could simplify the implementation process and allow the creation of a standardized way to adjust the system locally.
Multinational Orientation

In this orientation, the individual subsidiaries are loosely coupled to the center. Following the explanations of Bartlett and Ghoshal (1998), every site does business under its own responsibility. The main strategic focus is differentiation and adjustment to local requirements. Because of the absent links between company sites, each site creates its own data and knowledge. Regarding the definition of a global ERP strategy given above, the question of whether a company has a global ERP strategy will have to be assessed based on the power and position of HQ. One possibility is that HQ has no influence over the individual sites. This can mean no influence over any guidelines at all, especially regarding the IT landscape, implying that the above definition of an overall ERP strategy is not applicable in such a context.
There will be no shared database, and no guidelines for one system or for the range of modules that should be established. For the whole company, there is no global resource planning at all. Another case occurs if HQ decides to implement several modules in order to manage its business and satisfy its data needs. Following that idea, interfaces between the individual sites and HQ, including data formats and forms, have to be defined. However, this scenario still does not meet the requirements of the definition of a global ERP strategy, because it focuses on HQ only and not on the entire organization. A third scenario emerges when HQ can develop an ERP template and also has the power to implement it at the individual sites. This scenario causes other problems regarding Bartlett and Ghoshal's definition of a multinational orientation. Moreover, there will be no need for one shared database or for the integration of all functional areas, because every site manages its own operations. The production lines, customers, and suppliers differ from each other; a single database has no additional benefit for the organization. The only part of an ERP system that could unite the individual sites is finance, which has to be done in some way by HQ as it acts as a kind of portfolio manager. Apparently, there will be no fit between a multinational orientation and a global ERP strategy if no exceptions to our definition are allowed. At that point, the decision makers in this kind of MNC have to think about the need for an ERP system. Who would benefit from a shared database? The answer to that question is fairly easy: no one, because each site acts independently, reacts to different countries' markets and customers, and deals with different products. Possibly, there is no need for some subsidiaries to implement a fully integrated software package because of their small size. A globally defined ERP system will not meet local requirements if no major adjustments to the system are made.
Thus, the ERP strategy could consist of just a minimum—that is, the data connection between the individual sites and HQ, as in the second scenario described above. Another ERP strategy could be that HQ is in charge of supporting the implementation of ERP systems at each site, which should then connect with HQ. Sites retain their independence because only the use of some system is dictated; neither the system vendor nor the way it is implemented is centrally prescribed. According to Markus, Tanis, and van Fenema (2000), this strategy could lead to a complete disaster for individual sites, because there are no learning-curve effects, and errors made in one ERP implementation may well be repeated in others. The implementation strategy itself can be simple, because only the data formats and forms used for data exchange with HQ have to be defined.
Transnational Orientation

The transnational orientation describes an integrated network of business units. Neither a site nor the center has overall control of decisions and strategies. Moreover, each site's general manager has the opportunity to cooperate at each level or in each functional area with other sites. The initiative for such actions can come from the needs of the firm at that moment. Bartlett and Ghoshal (1998) write about cooperation in research areas and in developing common business processes in order to illustrate the principle of the transnational orientation. Moreover, they identify different roles which a single site can play. A role depends on the site's position along two dimensions: (1) how important is the site's national or local environment to the firm's strategy, and (2) what competencies does the site have? If both dimensions indicate high levels, the site will act as a strategic leader in the network of business units. Otherwise, for example when a site scores low on each dimension, the site will be a so-called implementer.
Sites that have few local resources or capabilities (competencies) but operate in an important local market (environment) are labeled black holes. In contrast, a site that has strong competencies but operates in a rather unimportant market is called a contributor (Bartlett & Ghoshal, 1998). Because of the specific structure of a transnational organization, it is unclear who can start the initiative for an ERP strategy. People work together on a voluntary basis, and the subsidiaries can implement and use business practices as they want. Regarding the roles described above, it may be argued that the strategic leader has the competencies to develop an ERP strategy and might convince other firms in the organization to take part in this venture. As soon as some sites implement a jointly developed system, a kind of domino effect could start, so that ultimately every site in the network joins the ERP project. Obviously, sites labeled black holes should have a major interest in advancing their abilities. A common and integrated database across all subsidiaries—that is, the accumulated knowledge of individual sites stored in a shared database—will be a source from which weaker companies can learn easily and inexpensively from the experience of other network participants. Thus, the ERP strategy includes the freedom of module choice and implementation approach at the individual sites of the network. The initiative could be started by a strategic leader with core competencies in one area, while other design processes could be delegated to other sites which have competencies in a special area. However, all activities have to be coordinated within the network, which is a difficult task because there is, by definition, no HQ or center within this business orientation. The strategy must allow individual parts of the ERP system to be used in any way and in any combination at each site.
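Bartlett and Ghoshal's role typology, and the ERP strategy each business orientation suggests, can be condensed into a small lookup. The sketch below is only a schematic restatement of the framework discussed in this section; the strategy attributes are simplified summaries, not prescriptions:

```python
# Schematic restatement of the framework; attributes are simplified.

def site_role(environment_important: bool, competencies_strong: bool) -> str:
    """Site roles in a transnational network (Bartlett & Ghoshal, 1998)."""
    if environment_important and competencies_strong:
        return "strategic leader"
    if environment_important:
        return "black hole"    # important market, few local capabilities
    if competencies_strong:
        return "contributor"   # strong competencies, unimportant market
    return "implementer"

# orientation -> (who drives the ERP strategy, one shared system?,
#                 local adjustments allowed?)
ERP_STRATEGY = {
    "global":        ("HQ",          True,  False),
    "international": ("HQ",          True,  True),   # locally reconfigurable core
    "multinational": ("local sites", False, True),   # only HQ interfaces defined
    "transnational": ("network",     True,  True),   # free module choice per site
}

print(site_role(environment_important=True, competencies_strong=False))
# -> black hole
```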
ERP IMPLEMENTATION STRATEGIES

An ERP implementation strategy should consist of three major parts: implementation orientation, scope of standardization, and implementation procedure.
Implementation Orientation

Referring to its business orientation, an organization may take one of the following three implementation orientations to plan the implementation of a global ERP system across its sites: global rollout, local implementation, or mixed-mode implementation.
Global Rollout

Global rollout means the creation of a kind of global template which is implemented at each site without any customization. This template could also be the vendor's off-the-shelf solution. The main attribute is the unchangeable configuration of the system. To ensure the success of this approach, it is necessary to collect the essential data on unique business processes in the organization. This information is the basis for the general adjustment of the kernel of the ERP system—that is, a single customization of the system, done once before the first global rollout starts. One might argue that this single customization is not necessary if an organization decides to change its processes according to the pre-built processes in the chosen ERP system. However, we assume that, in reality, no organization will follow this kind of implementation approach, due to the wide range of problems that can occur (Light, 2005). We propose that organizations tend to customize their system in order to increase business performance (Holland, Light, & Gibson, 1998). In accordance with the requirements mentioned above, all sites will be assimilated and will lose any kind of procedural differentiation.
On the other hand, data collection must be done just once. After a pilot implementation and the elimination of teething troubles, the implementation process can be finished quickly. The idea of the global rollout is closely connected to centralization of the whole IT landscape. The global rollout orientation seems most appropriate in an MNC with a global orientation and strong top management support which formulates clear goals and sustains change management (Brown & Vessey, 2000).
Local Implementation

Local implementation is the direct opposite of the global rollout orientation. Every site is enabled to customize any kind of ERP system to its own demands; the idea of decentralization is thus realized within this approach. It is assumed that some of the restrictions mentioned with regard to the global rollout have to be observed by local implementations as well, because a decentralized approach, as noted in the description of the multinational orientation, must have a shared base which affords management of the portfolio of sites and defines the existing organization. But this common basis can be very small and may concern only specific interfaces or number calculation metrics (Light, 1999). The responsibility for the right choice, configuration, administration, and maintenance remains with each site. If there is any grouping of site activities, the initiative comes from the sites themselves. This implementation orientation is most appropriate in MNCs with a multinational orientation; none of the three critical success factors mentioned above (top management support, clear goals, and sustained change management) is required on a global scale.

Both implementation orientations represent the extreme points on a continuum, and in reality there will be different kinds of mixtures. A combination of these two implementation orientations might be an adequate approach for transnational organizations or international business orientations.
Mixed-Mode Implementation

Mixed-mode implementation blends the global rollout and local implementation orientations. On the one hand, data must be harmonized to generate global economies of scale. On the other hand, there is a high degree of freedom to choose an ERP system according to local demands, one which fits the individual site. These two aspects most likely result in standardization of interfaces among sites, while hosting and administration of the systems remain the responsibility of the individual sites. Assuming the transnational organization operates in a single industry, there might be similar processes and data which could be joined within a kind of template. By generating best-practice solutions, the individual sites could upgrade their position within the organization and their market. The template has to grow and evolve over the whole process, and so will the degree of complexity. This includes the monitoring of new processes added to the template, which might have side effects on, or offer new opportunities to, already finished ERP implementations. This could mean the need to conduct a further project to restructure a site or its ERP system so that it can participate in new system settings. Thereby, the boundaries of a single ERP blur, implying that the allocation of costs to a single ERP implementation becomes more difficult for finance and cost accounting. The mixed-mode orientation would also fit a holding company that has to cope with a network of different sites, or groups of sites, which constitute one or more of the above-mentioned business orientations.
Scope of Standardization

Davenport (1998) advised that an organization must analyze its business to find out about its unique business processes.
Not until this step is done should the standardization process start; otherwise, existing business advantages could disappear because of a misalignment between standardized processes and enterprise requirements. Standardization can aim at the system/processes, the data/data formats, and/or the organizational structure of sites. The degree of standardization depends on the combination of these three items and their characteristics. If the ERP software is specified, this also prescribes business processes according to the vendor's experiences with other ERP implementations. Using the same system thus means introducing similar processes, which can be customized by the purchaser. A major advantage of this approach lies in the pre-specified interfaces which interconnect the individual system modules; data exchange should thus be easily established. On the other side, there might not be an ideal alignment between system and site in the case of heterogeneous business processes among the different sites (Mabert et al., 2001). The tailoring of an ERP system and its consequences for the implementation should be considered within the whole project plan—that is, right at the outset of the system design process. There are standardized process manuals developed by the IEEE that can be used for this purpose (Fitzgerald, Russo, & O'Kane, 2003).

Data and data format standardization imply that there are interface specifications that enable data flow between different systems. Besides, the prescribed data formats guarantee an integrated view of all sites on a common basis. As mentioned before, this might be limited to data in the finance and accounting departments. If the system/processes and/or the data/data formats are standardized, some sites in an organization have to change their processes according to these specifications. Therefore, departments, modes of operation, and hierarchical power may have to be changed too. This has to be taken into consideration regarding the expected implementation duration.
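A data/data-format standard of the kind just described—prescribed fields and formats for the (here, finance) data each site delivers—might look like the following sketch. The record layout, field names, and validation rules are invented for illustration:

```python
# Invented example of a prescribed site-to-HQ data-format standard,
# limited to finance data as suggested above.

from dataclasses import dataclass
from datetime import date

@dataclass
class MonthlyFinanceReport:
    """Standard record every site delivers to HQ."""
    site_code: str        # e.g., "NZ-AKL" (naming scheme is hypothetical)
    period: date          # first day of the reporting month
    currency: str         # ISO 4217 code; HQ converts to group currency
    revenue: float
    operating_cost: float

    def validate(self) -> None:
        if len(self.currency) != 3:
            raise ValueError("currency must be an ISO 4217 code")
        if self.period.day != 1:
            raise ValueError("period must be the first day of a month")

report = MonthlyFinanceReport("NZ-AKL", date(2008, 7, 1), "NZD", 1.2e6, 0.9e6)
report.validate()  # each site checks conformance before transmitting
```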
Which type of standardization is most appropriate depends on the overall implementation orientation; for example, a global rollout implementation orientation will aim at standardization of sites' organizational structures, while a local implementation orientation will focus on data standards, preferably in the financial area. Finally, deriving an ERP strategy from the business strategy must consider the costs that arise during the entire process. There is a trade-off between the degree of system standardization—which brings additional costs for administration, maintenance, and (user) support—and the benefits created by using it. For example, Light (1999) mentioned that a shared database, often stated as a major advantage of an ERP, does not need to be installed to meet the widespread market demands of an MNC. It therefore seems important to invest more resources and time in defining the standardization scope, to find the right fit between the degree of standardization and the costs of implementation.
Implementation Procedure

In the literature, two procedures for ERP implementation are mentioned: big bang and phased implementation (cf. Umble, Haft, & Umble, 2003). In a big bang approach, every site introduces the system at the same time, or one site implements all modules at the same time. In the first case, the risk of failure might be at its maximum: if there is some sort of template and it is insufficient for the individual sites, the implementation ends up in a disaster for the whole organization. If there is no template, every site faces the problem of customizing its system according to the organization's specifications. There might be redundant system configurations without any learning effects from previous implementations. Depending on project management and site capabilities, the project may or may not be successful.
A phased implementation can be site-by-site, department-by-department, or module-by-module. (Site-by-site implementation is similar to the second case of the big bang approach but is discussed here.) All these variants enable an organization to test single configurations of the ERP system before implementing them; meanwhile, the old legacy system can handle market demands in its usual way. On the other hand, this procedure extends the duration of the whole implementation process, which may increase costs and slow down the speed with which increased efficiency can be realized. The phased implementation approach offers the possibility of arranging a sequence for the ERP implementations which can be adjusted according to project experiences, urgency, and the strategic importance of the sites. It must be pointed out, however, that the HQ of an organization may not be free to arrange the sequence of implementations without obtaining the support and consent of the sites affected. In multinational and transnational orientations, the individual sites have high degrees of autonomy, which might put them in a position to resist the organizational change process.

Neither a big bang nor a phased implementation procedure can realize learning effects if there is no sequential implementation of any kind. So the company has to install mechanisms to collect knowledge from finished implementations so that project management can use it in follow-up projects. This knowledge covers not only difficulties and solutions with system adjustments but also how the company reacted to the new system, how people have to be trained on new functionalities, and issues regarding local legal requirements.
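The learning effects that only a sequential procedure can capture are often modeled with a standard learning curve. In the sketch below, the 90% learning rate and 12-month baseline are assumptions chosen purely for illustration:

```python
# Toy learning-curve model for a sequential (e.g., site-by-site) rollout:
# each doubling of completed implementations multiplies the duration by
# a fixed learning rate. The 90% rate and 12-month baseline are assumed.

from math import log2

def implementation_months(n: int, first: float = 12.0,
                          learning_rate: float = 0.90) -> float:
    """Duration of the n-th implementation under Wright's learning curve."""
    return first * n ** log2(learning_rate)

for n in (1, 2, 4, 8):
    print(f"implementation {n}: {implementation_months(n):4.1f} months")
# 12.0, 10.8, 9.7, 8.7 -- a big bang across all sites at once would
# realize none of these gains, since nothing is ever reused
```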
Another very important dimension of the implementation procedure is business process redesign (BPR). Depending on which implementation orientation and scope of standardization the company chooses, changes in existing business processes or in the system processes are implied. For example, implementation with a global template in a globally oriented organization demands the reconfiguration of business processes at every site in accordance with the template. For the sake of completeness, this topic is mentioned here briefly, but due to its complexity and the problems it implies—for example, resistance to change or maintenance of the system—the topic is beyond the chapter's scope. For a deeper insight into BPR and ERP, we refer to the literature (Ng, Ip, & Lee, 1999; Koch, 2001; Scheer & Habermann, 2000; Esteves et al., 2002; Gunson & de Blais, 2002; Luo & Strong, 2004).
CHANGING THE BUSINESS ORIENTATION

If we follow the framework of business orientations described above, a company's ERP decision makers have to bear another aspect in mind: it could be an organization's strategy to use the implementation of an ERP system in order to change the business orientation of the organization, thereby exploiting the ERP implementation process as a tool. Whether or not this use is feasible (see Future Trends and Research Directions), it complicates the task for ERP planners. New authorization procedures may have to be created, and new modules may have to be established at the different sites, which may result in yet another implementation. While three of the four orientations have similar implications regarding the implementation procedure, a change towards or away from a multinational orientation seems to be a special case. The reason for this is the centralized vs. decentralized structure of data management. In the case of creating a multinational orientation, the individual sites must be enabled to do business on their own. To do so, they will need an ERP system with all functions, and the knowledge of how to configure, maintain, and expand such a system on their own (e.g., Luo & Strong, 2004). On the other hand, if an organization is to become more integrated, for example by moving from a multinational orientation to an international one, all the separate data will have to be cleaned up and consolidated in one database.
This means a long-lasting process of consensus building with regard to the appearance, form, and quantity of data (cf. Oliver & Romm, 2002). Moreover, both steps presuppose a solution to the above-mentioned problem regarding the need for a central authority that defines and enforces the global ERP strategy, because the possibility of initiating a change of business orientation implies a sufficiently powerful center in the organization. A change in business orientation could be necessary to stay competitive, or because HQ wants to change its own position. In the case where HQ wants to act more like a portfolio manager, the company must be reorganized if its form does not fit this intention. This could mean more decentralization of power and a decoupling of sites. Because of this, the ERP implementation process could become more complicated, since decision-making power and the commitment of top management are delegated one step down in the organizational hierarchy. For example, if in the original business orientation all decisions are made by HQ, HQ has access to all global resources of the organization. If one division or business unit (site) general manager is then given the responsibility for an ERP implementation process for parts of the whole company, access to these resources is hampered or impossible for HQ. This will complicate the implementation of an ERP system in that part of the company.
A MODEL FOR MONITORING THE ERP IMPLEMENTATION PROCESS

The discussion so far is summarized in Figure 1. The business orientation leads to an ERP strategy based, in general, on a centralized or decentralized approach. Adding the required management conduct, which is a precondition for the successful realization of an ERP strategy, yields the constraints for an ERP implementation strategy.
Figure 1. Steps in an ERP implementation strategy
In order to evaluate the ERP implementation strategy after an implementation orientation is chosen, we distinguish three phases: an intelligence phase, an implementation phase, and an evaluation phase. In the first phase, the scope of standardization is set by HQ, which can have major effects on project duration and costs. If this phase is planned and executed well, it has to be done only once in the ERP project. In the second phase, there is the choice between a big bang and a phased implementation approach. To analyze this phase in more detail, there are other models, such as those by Markus and Tanis (2000) or Somers and Nelson (2004), who divide the single implementation into several further phases. Because they disregard an organization's business orientation, however, such views seem too limited: the effects on the ERP strategy are not considered. The data collected in the implementation phase are analyzed in the third, evaluation phase. Based on the experiences of prior implementations, the sequence of future implementations can be rearranged. Depending on the outcomes of this phase, the intelligence phase might have to be revised. This model should enable an organization to plan its ERP strategy and incorporate learning effects.
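The three phases and their feedback relationship can be read as a simple control loop. The sketch below is only a schematic of the model; the phase bodies are placeholders, since their content is organization-specific:

```python
# Schematic control-loop reading of the three-phase model. The phase
# functions are placeholders; their real content depends on the company.

def intelligence_phase() -> dict:
    # HQ sets the scope of standardization (ideally done only once)
    return {"standardize": ["data formats", "interfaces"]}

def implementation_phase(site: str, scope: dict) -> dict:
    # big bang or phased introduction at the site, under the given scope
    return {"site": site, "lessons": f"experience gathered at {site}"}

def evaluation_phase(experience: dict, remaining: list, scope: dict):
    # analysis may resequence the remaining sites or revise the scope,
    # sending the program back through the intelligence phase
    return remaining, scope

scope = intelligence_phase()
remaining = ["pilot site", "site B", "site C"]   # hypothetical sequence
while remaining:
    site = remaining.pop(0)
    experience = implementation_phase(site, scope)
    remaining, scope = evaluation_phase(experience, remaining, scope)
```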
In the following, the three phases are illustrated, by way of example, by the two polar cases of global rollout and local implementation.
Global Rollout

Intelligence Phase

First, there is a scanning phase which comprises the collection of important information regarding established business processes, needed data and data formats, interfaces, subsystems, national regulations, and so on. Once this information is gathered, it is used by a strong moderator (most likely HQ) to generate a template. The moderator identifies core processes and adjusts these processes to unify them, thereby defining the template's core processes. The template covers approximately 80-90% of all business processes (Umble et al., 2003). This can be a time-consuming and expensive process, depending on the number and the heterogeneity of sites. Furthermore, this process becomes more complicated if the sites are located in different cultural areas (Davison, 2002; Soh, Sia, & Tay-Yap, 2000). Depending on their strategic meaning or role, not all sites have the same stake in this process.
Implementation Phase

Once the template is generated, the actual global rollout can begin. The usage of the template may cause a business process redesign within the sites, but still allows adaptation to local requirements in the range of 10-20%. Resistance to change should be low because all sites were participants in the process producing the template. As there is only limited potential for customization, the duration of the implementation phase should be significantly shorter than that of the intelligence phase.

Evaluation Phase

The subsequent evaluation phase helps the organization to improve its project management skills and therefore to reduce the time needed to introduce the template into further sites. Meanwhile, the sequence of ERP implementations is reviewed so that, given sound project management techniques and a well-built template, strategically important sites are served first.

Local Implementation

Intelligence Phase

In contrast to the global rollout, only the information needs of HQ are relevant. After identifying the important data which every site has to deliver, HQ determines the data, data formats, and interfaces for interconnection within the structure. This differs from the template approach because the result is not a finished system that can easily be implemented at each site; instead, it creates constraints and demands on the sites regarding their implementations or data support. HQ may use the collected data for performance comparison of the sites. To reduce information asymmetries and simplify comparison, channels of data interchange may have to be standardized.

Implementation Phase

In this phase each site has to be examined by local management. The result could be a fully customized ERP system, obeying only the constraints mentioned above. Of course, every single implementation bears the same risks of failure. Meeting the requirements of each individual site will increase this phase's time and money requirements. Afterwards the system is well tailored to the individual site, but each implementation repeats the whole process over again.

Evaluation Phase

The outcome of the evaluation phase depends on the kind of implementation procedure. If there was a big bang (all sites at one time) approach, no learning effects would be realized and the result of the implementation process might be unpredictable. If it was a phased implementation, learning effects can be realized through "reuse" of processes or parts, which could shorten the time for the next implementation phase. Figure 2 shows a schematic demonstration of how the allocation of time and money could differ in the two approaches. We propose these curve progressions because of the arguments above: a global rollout starts with a high budget in the intelligence phase to create a template, but could reduce implementation costs because of the repetition and learning effects from using the same process over and over again. The costs of the local implementation, on the other hand, start at a low level and increase over the course of the project.

Figure 2. Allocation of time/money during project phases
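The proposed curve progressions can be made concrete with a small numerical illustration. In the Python sketch below, all budget figures, the number of sites, and the learning rate are hypothetical values chosen only to reproduce the qualitative shapes argued for above; they are not drawn from any data in this chapter.

```python
# Illustrative cumulative cost curves for the two polar strategies.
# All figures are hypothetical; only the qualitative shapes matter.

SITES = 8

def global_rollout_costs(sites: int = SITES) -> list[float]:
    """High intelligence-phase budget (template building), then
    per-site costs that fall due to template reuse and learning."""
    costs = [100.0]              # intelligence phase: build the template
    per_site = 30.0
    for _ in range(sites):
        costs.append(per_site)
        per_site *= 0.85         # assumed learning effect per rollout
    return costs

def local_implementation_costs(sites: int = SITES) -> list[float]:
    """Low start-up budget (HQ only defines data and interfaces),
    then full customization costs repeated at every site."""
    return [20.0] + [60.0] * sites

def cumulative(costs: list[float]) -> list[float]:
    total, out = 0.0, []
    for c in costs:
        total += c
        out.append(total)
    return out

if __name__ == "__main__":
    for step, (g, l) in enumerate(zip(cumulative(global_rollout_costs()),
                                      cumulative(local_implementation_costs()))):
        print(f"step {step}: global rollout {g:7.1f} | local implementation {l:7.1f}")
```

Running the sketch shows the crossover the chapter describes: the global rollout dominates early spending but its per-site increments shrink, while the local implementation's total grows linearly with every additional site.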
CONCLUSION

The implementation of an ERP system can take years and cost millions (Mabert et al., 2001). Therefore, organizations must manage this process very carefully. Some approaches try to forecast the success of such a step (Magnusson & Nilsson, 2004) and distinguish between several dimensions. Only rarely is the ERP implementation strategy implied by such a method mentioned or examined. Exploring the relationship between business strategy and, as recommended, the derived ERP implementation strategy can help:

a. management to detect and identify the effects of the choice of an ERP strategy, and
b. IT departments (e.g., represented by a CIO) to explain the complexity, and thus the support required from top management, to lead a (successful) ERP implementation in their organization.
The framework developed in this chapter gives hints regarding important considerations and possible pitfalls in planning a global ERP implementation process. Based on these insights, the presented model can help to monitor, control, and adjust the derived ERP strategy in order to meet management's expectations of the ERP system.
FUTURE TRENDS AND RESEARCH DIRECTIONS

In this chapter we implicitly assumed that the implementation of an ERP system has no effects
on the structure of a corporation. The structure of an organization or the orientation of an MNC was considered fixed and given. However, as mentioned in the section 'Changing the Business Orientation', it could be one of top management's objectives to use an ERP implementation process strategically. This strategic use would imply that the implementation (process) of ERP systems has a strong effect on organizations. It would also raise the question of whether ERP systems can achieve such changes in structure by themselves or whether they act only as a facilitator. In the first case, the ERP project becomes more complex and challenging for both top management and project management. In the latter case, top management must consider additional measures which enable the ERP project to reach the intended change, or measures that are supported by the ERP project. Top management as well as project management must bear in mind that such additional measures can also have negative side effects on each other. What kinds of measures would be appropriate, how they support ERP projects and how they are supported by such projects, and in what useful combinations they can be applied are interesting questions for future research. Furthermore, if one allows for intentional effects of ERP implementation processes on corporate structure, it becomes necessary to address the question of unintended effects. These will not only concern ERP implementation processes in corporations
that are actively trying to change their structures, but also ERP projects in the fixed structures discussed in this chapter. However, we did not discuss the implications of unintended effects on ERP implementation processes, due to space limitations and reasons of complexity. The model presented in this chapter allows for enhancement by future work and research.
ACKNOWLEDGMENT

The authors would like to thank the editors and anonymous reviewers for their useful comments and suggestions. Furthermore, the authors would like to thank Guido Schryen of RWTH Aachen University for his help in editing this chapter.

REFERENCES

Akkermanns, H., & van Helden, K. (2003). Vicious and virtuous cycles in ERP implementation. European Journal of Information Systems, 11, 35–46.

Bancroft, N., Sprengel, A., & Seip, H. (1996). Implementing SAP R/3: How to introduce a large system into a large organization. Englewood Cliffs, NJ: Prentice Hall.

Bartlett, C. A., & Ghoshal, S. (1998). Managing across borders: The transnational solution. Boston: Harvard Business School Press.

Bingi, P., Sharma, M. K., & Godla, J. K. (1999). Critical issues affecting an ERP implementation. Information Systems Management, 16(3), 7–14. doi:10.1201/1078/43197.16.3.19990601/31310.2

Brehm, L., Heinzl, A., & Markus, M. L. (2001). Tailoring ERP systems: A spectrum of choices and their implications. In Proceedings of the 34th Hawaii International Conference on System Sciences.

Brown, C. V., & Vessey, I. (2000). NIBCO's "big bang." In Proceedings of the International Conference on Information Systems.

Butler, T., & Pyke, A. (2003). Examining the influence of ERP systems on firm-specific knowledge and core capabilities: A case study of SAP implementation and use. In Proceedings of the 11th European Conference on Information Systems.

Clemmons, S., & Simon, J. S. (2001). Control and coordination in global ERP configuration. Business Process Management Journal, 7(3), 205–215. doi:10.1108/14637150110392665

Davenport, T. H. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, 76(4), 121–131.

Davison, R. (2002). Cultural complications of ERP. Communications of the ACM, 45(7), 109–111. doi:10.1145/514236.514267

Earl, M. J. (1993). Experience in strategic information systems planning. MIS Quarterly, 17(1), 1–24. doi:10.2307/249507

Esteves, J., & Pastor, J. (1999). An ERP lifecycle-based research agenda. In Proceedings of the International Workshop on Enterprise Management Resource Planning Systems (EMRPS) (pp. 359-371), Venice, Italy.

Esteves, J., & Pastor, J. (2001). Enterprise resource planning systems research: An annotated bibliography. Communications of the AIS, 7(8), 1–52.

Esteves, J., Pastor, J., & Casanovas, J. (2002). Monitoring business process redesign in ERP implementation projects. Retrieved April 13, 2007, from http://coblitz.codeen.org:3125/citeseer.ist.psu.edu/cache/papers/cs/31804/http:zSzzSzwww.lsi.upc.eszSz~jesteveszSzBPR_amcis2002.PDF/esteves02monitoring.pdf

Firestone, J. M. (2002). Enterprise information portals and knowledge management. Boston: Butterworth-Heinemann.
Fitzgerald, B., Russo, N. L., & O'Kane, T. (2003). Software development method tailoring at Motorola. Communications of the ACM, 46(4), 65–70. doi:10.1145/641205.641206

Ghosh, S., & Ghosh, S. (2003). Global implementation of ERP software: Critical success factors on upgrading technical infrastructure. In Proceedings of the International Engineering Management Conference, Managing Technologically Driven Organizations: The Human Side of Innovation and Change (pp. 320-324).

Gunson, J., & de Blais, J.-P. (2002). Implementing ERP in multinational companies: Their effects on the organization and individuals at work. Retrieved April 13, 2007, from http://www.hec.unige.ch/recherches_publications/cahiers/2002/2002.07.pdf

Holland, C. P., & Light, B. (1999). A critical success factors model for ERP implementation. IEEE Software, 16(3), 30–36. doi:10.1109/52.765784

Holland, C. P., Light, B., & Gibson, N. (1998). Global enterprise resource planning implementation. In Proceedings of the Americas Conference on Information Systems.

Hong, K.-K., & Kim, Y.-G. (2002). The critical success factors for ERP implementation: An organizational fit perspective. Information & Management, 40, 25–40. doi:10.1016/S0378-7206(01)00134-3

Huber, T., Alt, R., & Österle, H. (2000). Templates: Instruments for standardizing ERP systems. In Proceedings of the 33rd Annual Hawaii International Conference on System Sciences.

Karimi, J., & Konsynski, B. (1991). Globalization and information management strategies. Journal of Management Information Systems, 7(4), 7–26.

Koch, C. (2001). BPR and ERP: Realizing a vision of process with IT. Business Process Management Journal, 7(3), 258–265. doi:10.1108/14637150110392755

Krumbholz, M., Galliers, J., Coulianos, N., & Maiden, N. A. M. (2000). Implementing enterprise resource planning packages in different corporate and national cultures. Journal of Information Technology, 15(4), 267–279. doi:10.1080/02683960010008962

Light, B. (1999). Realizing the potential of ERP systems: The strategic implications of implementing an ERP strategy: The case of global petroleum. Electronic Markets, 9(4), 238–241. doi:10.1080/101967899358914

Light, B. (2005). Potential pitfalls in packaged software adoption. Communications of the ACM, 48(5), 119–121. doi:10.1145/1060710.1060742

Luo, W., & Strong, D. M. (2004). A framework for evaluating ERP implementation choices. IEEE Transactions on Engineering Management, 51(3), 322–333. doi:10.1109/TEM.2004.830862

Mabert, V. A., Soni, A., & Venkataramanan, M. A. (2001). Enterprise resource planning: Common myths versus evolving reality. Business Horizons, 44(3), 69–76. doi:10.1016/S0007-6813(01)80037-9

Madapusi, A., & D'Souza, D. (2005). Aligning ERP systems with international strategies. Information Systems Management, 22(1), 7–17. doi:10.1201/1078/44912.22.1.20051201/85734.2
Magnusson, J., & Nilsson, A. (2004). A conceptual framework for forecasting ERP implementation success. In Proceedings of the International Conference on Enterprise Information Systems (pp. 447-453).

Markus, M. L., Axline, S., Petrie, D., & Tanis, C. (2000). Learning from adopters' experiences with ERP: Problems encountered and success achieved. Journal of Information Technology, 15(4), 245–265. doi:10.1080/02683960010008944

Markus, M. L., Tanis, C., & van Fenema, P. C. (2000). Enterprise resource planning: Multisite ERP implementations. Communications of the ACM, 43(4). doi:10.1145/332051.332068

Markus, M. L., & Tanis, C. (2000). The enterprise systems experience: From adoption to success. In R. W. Zmud (Ed.), Framing the domains of IT research: Glimpsing the future through the past (pp. 173-207). Cincinnati, OH: Pinnaflex Educational Resources.

Ng, J. K. C., Ip, W. H., & Lee, T. C. (1999). A paradigm for ERP and BPR integration. International Journal of Production Research, 37(9), 2093–2108. doi:10.1080/002075499190923

O'Leary, D. E. (2000). Enterprise resource planning systems: Systems, life cycle, electronic commerce and risk. Cambridge, UK: Cambridge University Press.

Oliver, D., & Romm, C. (2002). Justifying enterprise resource planning adoption. Journal of Information Technology, 17(4), 199–213. doi:10.1080/0268396022000017761

Reimers, K. (1997). Managing information technology in the transnational organization: The potential of multifactor productivity. In Proceedings of the 30th Annual Hawaii Conference on System Sciences.
Scheer, A.-W., & Habermann, F. (2000). Making ERP a success. Communications of the ACM, 43(4), 57–61. doi:10.1145/332051.332073

Scott, J. E. (1999). The FoxMeyer Drug's bankruptcy: Was it a failure of ERP? In Proceedings of the Association for Information Systems 5th Americas Conference on Information Systems.

Soh, C., Sia, S. K., & Tay-Yap, J. (2000). Cultural fits and misfits: Is ERP a universal solution? Communications of the ACM, 43(4), 47–51. doi:10.1145/332051.332070

Somers, T. M., & Nelson, K. G. (2004). A taxonomy of players and activities across the ERP project life cycle. Information & Management, 41(3), 257–278. doi:10.1016/S0378-7206(03)00023-5

Sumner, M. (2000). Risk factors in enterprise-wide/ERP projects. Journal of Information Technology, 15, 317–327. doi:10.1080/02683960010009079

Umble, E. J., Haft, R. R., & Umble, M. M. (2003). Enterprise resource planning: Implementation procedures and critical success factors. European Journal of Operational Research, 146(2), 241–257. doi:10.1016/S0377-2217(02)00547-7

Ward, J., Griffiths, P., & Whitmore, P. (1990). Strategic planning for information systems. Chichester: John Wiley & Sons.
ADDITIONAL READING

Adam, F., & O'Doherty, P. (2000). Lessons from enterprise resource planning implementations in Ireland: Towards smaller and shorter ERP projects. Journal of Information Technology, 15(4), 305–316. doi:10.1080/02683960010008953

Bakos, Y. (1986). Information technology and corporate strategy: A research perspective. MIS Quarterly, 10(2), 106–119. doi:10.2307/249029
Biehl, M. (2007). Success factors for implementing global information systems. Communications of the ACM, 50(1), 53–58. doi:10.1145/1188913.1188917

Chien, S.-W., Hu, C., Reimers, K., & Lin, J.-S. (2007). The influence of centrifugal and centripetal forces on ERP project success in small and medium-sized enterprises in China and Taiwan. International Journal of Production Economics, 107(2), 380–396. doi:10.1016/j.ijpe.2006.10.002

Dowlatshahi, S. (2005). Strategic success factors in enterprise resource planning design and implementation: A case-study approach. International Journal of Production Research, 43(18), 3745–3771. doi:10.1080/00207540500140864

Fleisch, E., Österle, H., & Powell, S. (2004). Rapid implementation of enterprise resource planning systems. Journal of Organizational Computing and Electronic Commerce, 14(2), 107–126. doi:10.1207/s15327744joce1402_02

Gattiker, T., & Goodhue, D. (2005). What happens after ERP implementation: Understanding the impact of interdependence and differentiation on plant-level outcomes. MIS Quarterly, 29(3), 559–585.

Ghoshal, S., & Bartlett, C. (1990). The multinational corporation as an interorganizational network. Academy of Management Review, 15(4), 603–625. doi:10.2307/258684

Gosain, S. (2004). Enterprise information systems as objects and carriers of institutional forces: The new iron cage? Journal of the Association for Information Systems, 5(4), 151–182.

Grossman, T., & Walsh, J. (2004). Avoiding the pitfalls of ERP system implementation. Information Systems Management, 21(2), 38–42. doi:10.1201/1078/44118.21.2.20040301/80420.6
Hanseth, O., & Braa, K. (1998). Technology as a traitor: Emergent SAP infrastructure in a global organization. In R. Hirscheim, M. Newman, & J. I. DeGross (Eds.), Proceedings of the 19th International Conference on Information Systems (pp. 188-197).

Henderson, J., & Venkatraman, N. (1999). Strategic alignment: Leveraging information technology for transforming organizations. IBM Systems Journal, 38(2&3), 472–484.

Huang, S. M., Hung, Y. C., Chen, H. G., & Ku, C. Y. (2004). Transplanting the best practice for implementation of an ERP system: A structured inductive study of an international company. Journal of Computer Information Systems, 44(4), 101–110.

Ioannou, G., & Papadoyiannis, C. (2004). Theory of constraints-based methodology for effective ERP implementations. International Journal of Production Research, 42(23), 4927–4954. doi:10.1080/00207540410001721718

King, W. (2006). Ensuring ERP implementation success. Information Systems Management, 22(3), 83–84. doi:10.1201/1078/45317.22.3.20050601/88749.11

Krumbholz, M., & Maiden, N. (2001). The implementation of enterprise resource planning packages in different organisational and national cultures. Information Systems, 26(3), 185–204. doi:10.1016/S0306-4379(01)00016-3

Lall, V., & Teyarachakul, S. (2006). Enterprise resource planning (ERP) system selection: A data envelopment analysis (DEA) approach. Journal of Computer Information Systems, 47(1), 123–127.

Luftman, J., Lewis, P., & Oldach, S. (1993). Transforming the enterprise: The alignment of business and information technology strategies. IBM Systems Journal, 32(1), 198–221.
Mahato, S., Jain, A., & Balasubramanian, V. (2006). Enterprise systems consolidation. Information Systems Management, 23(4), 7–19. doi:10.1201/1078.10580530/46352.23.4.20060901/95108.2

Ng, C., Gable, G., & Chan, T. (2003). An ERP maintenance model. In Proceedings of the 36th Hawaii International Conference on System Sciences.

Parr, A., & Shanks, G. (2000). A model of ERP project implementation. Journal of Information Technology, 15(4), 289–303. doi:10.1080/02683960010009051

Rathnam, R. G., Johnsen, J., & Wen, H. J. (2004/05). Alignment of business strategy and IT strategy: A case study of a Fortune 50 financial service company. Journal of Computer Information Systems, 45(2), 1–8.

Reich, B., & Benbasat, I. (1996). Measuring the linkage between business and information technology objectives. MIS Quarterly, 20(1), 55–81. doi:10.2307/249542
Reimers, K. (2003). International examples of large-scale systems: Theory and practice: Implementing ERP systems in China. Communications of the AIS, 11, 335–356.

Ribbers, P., & Schoo, K.-C. (2002). Program management and complexity of ERP implementations. Engineering Management Journal, 14(2), 45–52.

Somers, T., & Nelson, K. (2003). The impact of strategy and integration mechanisms on enterprise system value: Empirical evidence from manufacturing firms. European Journal of Operational Research, 146(2), 315–338. doi:10.1016/S0377-2217(02)00552-0

Teltumbde, A. (2000). A framework for evaluating ERP projects. International Journal of Production Research, 38(17), 4507–4520. doi:10.1080/00207540050205262

Voordijk, H., Van Leuven, A., & Laan, A. (2003). Enterprise resource planning in a large construction firm: Implementation analysis. Construction Management and Economics, 21(5), 511–521. doi:10.1080/0144619032000072155
This work was previously published in Enterprise Resource Planning for Global Economies: Managerial Issues and Challenges, edited by Carlos Ferran and Ricardo Salim, pp. 37-54, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 1.6
Integrated Research and Training in Enterprise Information Systems

Chen-Yang Cheng, Penn State University, USA
Vamsi Salaka, Penn State University, USA
Vittal Prabhu, Penn State University, USA
ABSTRACT

The success of implementing an Enterprise Information System (EIS) depends on exploring and improving the EIS software and on EIS software training. However, the synthesis of these aspects into an EIS implementation approach has not been investigated. In this chapter, the authors propose an integrated research and training approach for students and employees covering the enterprise information systems (EIS) encountered in an organization. The integrated approach follows the different stages of a typical EIS project from inception to completion. These stages, as identified, are modeling, planning, simulation, transaction, integration, and control. This ensures that an employee trained by this plan has an acquaintance with the typical information systems in an organization. Further, for training and research purposes the authors developed prototype information systems that emulate the ones usually found in organizations. This ensures the EIS software logic is consistent with the business logic. The chapter also discusses some of the case studies conducted with the prototype systems.
DOI: 10.4018/978-1-60566-146-9.ch011

INTRODUCTION

Enterprise Information Systems (EIS) constitute the spectrum of information technology solutions used by an organization and influence nearly all aspects of an organization's operations.
Typical EIS include Enterprise Resource Planning (ERP), Customer and Supplier Relationship Management (CRM and SRM), and Manufacturing Execution Systems (MES). It is widely accepted that EIS deliver great rewards, but the risks these systems carry are equally important. If an organization rolls out an EIS without analyzing the business implications, the logic of the system may conflict with the logic of the business (Subramanian & Hoffer, 2005). This may result in failure of the implementation, wasting vast sums of money, and weakening important sources of the organization's competitive advantage. Prior research has investigated and identified the critical factors that influence the successful implementation of EIS. Education and training of employees on EIS is one of the most widely recognized factors (Umble & Haft, 2003; Hutchins, 1998; Ptak & Schragenheim, 2000; Laughlin, 1999). EIS implementation requires the knowledge of the employees to smoothly carry on the business process and to solve problems that arise from the new system. Even with good technical assistance, the complete potential of an EIS cannot be realized without employees having an appreciation of the capabilities and limitations of the system (Somers & Nelson, 2001). To make employee training successful, it is agreed that it should start early, preferably well before the rollout of the EIS begins. Upper management in large manufacturing enterprises often underestimates the level of education and training necessary to implement EIS, as well as its associated costs. It has been suggested that reserving 10-15% of the total EIS implementation budget is a good practice to ensure the employees receive enough training (McCaskey & Okrent, 1999; Umble & Haft, 2003). With the estimated budgets for implementing EIS in billions of dollars, the cost of training the employees on these systems is a very sizeable portion (Hong & Kim, 2002). These costs can be brought down if the employees have prior education and training on EIS. Hence, the Center for Manufacturing and Enterprise Integration (CMEI) at Penn State University focuses on training students and working professionals in EIS and related enterprise integration issues.

Further, as part of research at CMEI, projects are undertaken to study the information system infrastructure for planning in Small Manufacturing Enterprises (SMEs). These projects were aimed at improving operations management for SMEs with the help of Information Technology (IT). We found many avenues for improvement both in operations management and in information systems, but there were barriers to implementing such projects. These barriers vary from the capital requirements and complexity of the systems to human inertia to change. But the key barrier we noticed among SMEs is the lack of tools or expertise for handling EIS software. Most of the EIS software available requires extensive training for its use and maintenance. SMEs cannot afford this training as it involves hiring specialized engineers and consultants. Based on our experience with SMEs, and to reduce training costs for EIS deployment in large manufacturing enterprises, we developed a rollout plan which encompasses the different settings in an organization in which employees use EIS. The important stages that we identified are Modeling, Planning, Simulation, Integration, Transaction, and Control. We developed prototype software for each stage that emulates the functionalities of typical organizational software. In the following sections, we present an overall view of our rollout plan and then elaborate on the individual stages. In this work, our focus has been mostly centered on information systems that are typical in manufacturing organizations, but it can be readily extended to other industry segments.
ROLLOUT PLAN FOR TRAINING IN ENTERPRISE INFORMATION SYSTEMS

The rollout plan developed for training in EIS was modeled to follow the different stages that a new
project undergoes from inception to completion, as shown in Figure 1. Following this procedure, it is envisioned that training employees or students would introduce them to the spectrum of EIS software used in an organization. We have developed prototype software that emulates the working of industry-standard EIS software. It is important to note that the purpose of developing prototype software for the different stages in the rollout plan is to reduce complexity while maintaining the fundamental concepts found in industrial-strength software. The different stages that are part of our rollout plan are as follows:

Figure 1. Rollout plan for training in EIS

• In the modeling phase the requirements and objectives of a project are established. In general the requirements are captured by enterprise models using modeling techniques such as data flow diagrams (DFD) and the unified modeling language (UML). The typical software used in industry for developing enterprise models is Microsoft Visio® and Rational Rose®. For training purposes we developed a tool customizing Microsoft Visio® to provide user-friendly interfaces supporting a wide variety of modeling techniques (Mathew, 2003).

• The planning phase is where organizations evaluate alternatives and perform what-if analysis in order to design and come up with a plan of execution. In general the planning phase is strongly tied to the simulation phase, where the feasibility and the success of the proposed plans are evaluated with the developed simulation models. For training purposes, we developed the Sensors to Suppliers (S2S) planning and simulation tool (Mehta, 2003).

• The integration phase is where we focus our discussion on the need for, and advantages of, seamless integration of the various information systems within the organization. To train and demonstrate how data-level integration works, we developed the Schema Lens software, which extracts the schema from a source database and can map it, with transformations, to a destination.

• The transaction phase is where organizations execute the business processes that create customer value. In general, the transaction systems in an enterprise constitute the Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) systems. CRM systems provide a single view of all customer interactions and campaign management for personalized services. ERP systems execute a fixed sequence of defined functions such as purchasing or selling; they lack the flexibility to implement varying business processes. Hence, we developed an automated workflow system which integrates the ERP system with web services.

• The control phase in manufacturing systems is where the data collected from the shop floor is fed back to business systems to make informed decisions. Here we discuss Radio Frequency Identification (RFID) data collection techniques combined with distributed computing to solve computational problems in manufacturing. For training purposes, we implemented an RFID-based Manufacturing Execution System (MES) for updating cutting tool information in real time and built a library for distributed computing (Cheng & Prabhu, 2007a, 2007b, 2007c).
ENTERPRISE MODELING

There is a general lack of documentation and standardization of business processes in organizations. This lack of documentation leads to inconsistent formats and data. The flow of documents is often not clear, leading to gaps in these processes. For the implementation of information systems, it is very important that an organization has standardized business processes. Enterprise modeling facilitates the understanding of business processes and the relations that exist within and across the various departments of an organization (Kamath, Dalal, Chaugule, Sivaraman, & Kolarik, 2003). It is important to train the users of an EIS in enterprise modeling to familiarize them with the organization's business processes and provide an understanding of how EIS software drives the business process. For the purpose of training and conducting related research we developed an Integrated Decision Support (IDS) framework by enriching the interfaces in Microsoft Visio® for more specific data collection and modeling requirements. The customized environment is shown in Figure 2. A number of enterprise modeling techniques employed in industry are discussed by Kamath, Dalal, Chaugule, Sivaraman, and Kolarik (2003). We support a wide range of them in IDS for the following reasons: a particular modeling technique can be better suited to expressing a part of the problem, and different users can have their own preferences and comfort with a particular technique. In one of the case studies we conducted with this tool, we worked with a defense and aerospace organization to capture their business processes and IT infrastructure to identify the requirements for enterprise integration.
Figure 2. Enterprise modeling environment customized in Microsoft Visio®
Further, we were able to collaboratively evaluate the various available integration choices based on estimates of cost and complexity. The detailed functioning of IDS, along with the case study, is presented in a related work (Salaka & Prabhu, 2006).
PLANNING AND SIMULATION

One of the most common information system deployments in manufacturing is for identifying planning requirements. These systems maintain order status and explode it into material and operational requirements based on the bill of materials. Further, these systems act as a bookkeeping tool and are linked to accounting. Simulation allows the quantitative analysis of the operating policies made in the planning phase. As computing power becomes inexpensive, using software simulation for analysis is becoming much more widely accepted. In general the planning phase is strongly tied to simulation, where the feasibility and the success of the proposed plans are evaluated with the developed simulation models. We developed a Sensors to Suppliers (S2S) tool that enables both planning and simulation of manufacturing systems with their complex structure and behavior. A brief description of the S2S tool is presented in the following section.
Sensors to Suppliers Planning and Simulation Tool

The S2S simulation tool contains a rich library of components that can be configured to represent a particular business. The components support different policies, and the user controls the decision-making rules and the quantitative variables. Apart from that, the tool provides an interactive and intuitive user interface for model development and performance analysis. The elements of the modeling tool and their function are shown in Figure 3. The objective of the tool is to capture
the business model as quickly as possible and in as much detail as desired by the user. Once the business model is captured, it can be simulated and the desired metrics can be analyzed. The user can then iteratively change planning policies and quantitative variables of the model, and add and remove components, to gain insights under different scenarios. S2S is implemented in an object-oriented programming language on the Microsoft .NET platform. The simulation is executed by having objects representing business components interact with each other. Using the development interface, business models can be created, modified, saved, and merged. S2S is currently being used in an academic setting to educate and train students in simulation for evaluating various planning policies in manufacturing systems. Further, it has been an attractive tool for SMEs who lack the capital, experience, and expertise to use commercial simulation tools. We demonstrated the use of S2S in SMEs by a case study performed in a wood manufacturing enterprise. The details of the case study are presented in a related work (Salaka, Mehta, & Prabhu, 2005).

Figure 3. S2S Modeling tool
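The chapter does not publish S2S's .NET source, but the idea of objects representing business components interacting with each other can be sketched in a few lines. In the Python sketch below, the class names and the single-outstanding-order reorder policy are illustrative assumptions, not the actual S2S class model.

```python
# Minimal sketch of component-based simulation in the spirit of S2S.
# Class names and the reorder policy are illustrative assumptions only.

class Supplier:
    def __init__(self, lead_time: int = 2):
        self.lead_time = lead_time
        self.pipeline = []                    # list of (arrival_day, qty)

    def order(self, day: int, qty: int):
        self.pipeline.append((day + self.lead_time, qty))

    def deliver(self, day: int) -> int:
        arrived = sum(q for d, q in self.pipeline if d == day)
        self.pipeline = [(d, q) for d, q in self.pipeline if d != day]
        return arrived

class Factory:
    def __init__(self, supplier: Supplier, reorder_point: int = 20,
                 order_qty: int = 50):
        self.supplier = supplier
        self.stock = 40
        self.reorder_point = reorder_point
        self.order_qty = order_qty

    def run_day(self, day: int, demand: int) -> int:
        self.stock += self.supplier.deliver(day)    # receive deliveries
        shipped = min(self.stock, demand)           # serve today's demand
        self.stock -= shipped
        # Reorder policy: at most one outstanding order at a time.
        if self.stock <= self.reorder_point and not self.supplier.pipeline:
            self.supplier.order(day, self.order_qty)
        return shipped

if __name__ == "__main__":
    factory = Factory(Supplier())
    for day, demand in enumerate([10, 15, 25, 10, 30, 20, 10]):
        shipped = factory.run_day(day, demand)
        print(f"day {day}: demand {demand:2d}, shipped {shipped:2d}, "
              f"stock {factory.stock:2d}")
```

A user of a tool like S2S would, in effect, be configuring and rewiring components of this kind through a graphical interface, then rerunning the simulation to compare metrics under different policies.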
ENTERPRISE INTEGRATION

Enterprise integration (EI) is the creation of new business solutions by effectively utilizing the capabilities of existing software applications and allowing rapid movement of information between them. EI is strategically important for improved business process management. The market for EI was valued at $8.3 billion for the year 2004-2005 and was expected to grow 20% during the year 2005-2006 (Salaka & Prabhu, 2006), which indicates the emphasis on integration in the industry. Knowledge of enterprise integration is an essential part of training on EIS software for students and organizational employees, to make them realize the potential of integrated enterprises. As part of our rollout plan, we train students on contemporary standards and techniques in EI.
The training plan on integration includes familiarizing the undergraduate students with typical XML features that are currently supported by most industry software for data integration. For this, we demonstrate the XML features in Microsoft Office® software and how these features can be used to migrate data between software applications. Figure 4 shows one of the demonstrations given to the students as part of the training. For research purposes, and to acquaint students with industry-standard integration software, we developed the Schema Lens data-level integration software, which is described next.

Figure 4. Demonstration of XML features in Microsoft Office®
Schema Lens

Figure 5 shows the framework of the software. Schema Lens transfers data from a legacy system to a production system, extracting the schema and data as an XML file from the source database using a Java® application. The front end for the application is a web browser. The browser connects to a web application residing in a web server. Further, the web server contacts the destination database and enables mapping the XML data
from the source to the tables and columns of the destination database. Currently, Schema Lens is being used as a test bed for a PhD student's work.

Figure 5. Framework of Schema Lens
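Schema Lens itself is a Java web application, but the core step it performs, extracting a source database's schema into XML so it can later be mapped onto a destination, can be sketched compactly. The following Python sketch uses SQLite and the standard library only; the table used and the XML layout are illustrative assumptions, not Schema Lens's actual format.

```python
# Sketch of schema extraction to XML, the first step Schema Lens performs.
# Standard library only; the XML layout is an assumed, illustrative format.
import sqlite3
import xml.etree.ElementTree as ET

def extract_schema(db_path: str) -> str:
    """Read table and column definitions from a SQLite source database
    and serialize them as an XML document."""
    conn = sqlite3.connect(db_path)
    root = ET.Element("schema", source=db_path)
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in tables:
        t = ET.SubElement(root, "table", name=table)
        # PRAGMA table_info returns (cid, name, type, notnull, default, pk)
        for _, name, col_type, notnull, _, pk in conn.execute(
                f"PRAGMA table_info({table})"):
            ET.SubElement(t, "column", name=name, type=col_type,
                          nullable=str(not notnull),
                          primary_key=str(bool(pk)))
    conn.close()
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    conn = sqlite3.connect("legacy.db")   # hypothetical legacy database
    conn.execute("CREATE TABLE IF NOT EXISTS customer "
                 "(id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    conn.commit()
    conn.close()
    print(extract_schema("legacy.db"))
```

Given such an XML description of the source, the mapping step then becomes a matter of pairing source columns with destination columns, optionally applying transformations along the way.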
CONTROL OF MANUFACTURING SYSTEMS

The general idea of control is to use feedback from sensors to adjust a system to obtain the desired results. In a manufacturing system, feedback from the shop floor can improve business decisions. We developed an RFID-based Manufacturing Execution System for the purpose of training and related research into the latest advances in manufacturing control brought by RFID technology. The current system is modeled to address tool management issues in machining. Further, to demonstrate the importance of faster solution frameworks that can take advantage of the large volumes of data generated by RFID technology in manufacturing control, we built PennDiCon, a library for distributed computing. In the following sections, we detail the functioning of these systems.

RFID-Based Manufacturing Execution System

Figure 6 illustrates the RFID-based Manufacturing Execution System framework for tool management. Each Computer Numerical Control (CNC)
tool is fitted with an RFID tag that records the tool's usage time, preset data (projection, edge radius, and effective diameter), and operation data (speeds, feeds, and depth of cut). When the tool is attached to the spindle, the RFID reader on the CNC machine is configured to read the
data from the RFID tag. This avoids all sources of potential data entry error, tool offsetting errors, and possible tool crashes. After the completion of each operation, the RFID encoder on the CNC machine is programmed to write the usage time of the tool onto the RFID tag and further update the Quality Performance Management (QPM) system. Based on the demand for tool replacement as scheduled by QPM, a purchase order can be created in the ERP system in advance to avoid stock shortages and reduce downtime.

Figure 6. RFID-based MES system
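The tag payload described above can be captured in a small data model. In the Python sketch below, the field names mirror the description (usage time, preset data, operation data), but the exact record layout and the tool-life threshold are illustrative assumptions, not the actual tag or QPM format.

```python
# Sketch of the tool data carried on an RFID tag and the read/write cycle
# described above. Field names follow the text; the threshold is assumed.
from dataclasses import dataclass

@dataclass
class ToolTag:
    tool_id: str
    usage_minutes: float       # cumulative usage time
    projection: float          # preset data
    edge_radius: float
    effective_diameter: float
    speed: float               # operation data
    feed: float
    depth_of_cut: float

MAX_USAGE_MINUTES = 480.0      # hypothetical tool-life limit

def after_operation(tag: ToolTag, minutes: float) -> bool:
    """Encoder step: write back the usage time after an operation and
    report whether QPM should schedule a replacement."""
    tag.usage_minutes += minutes
    return tag.usage_minutes >= MAX_USAGE_MINUTES

if __name__ == "__main__":
    tag = ToolTag("T-042", 430.0, 35.0, 0.8, 12.0, 3000.0, 0.12, 1.5)
    if after_operation(tag, 60.0):
        print(f"{tag.tool_id}: {tag.usage_minutes} min used -> "
              "QPM schedules replacement; ERP raises a purchase order")
```

Reading this record directly from the spindle-mounted tag, rather than from manual entry, is what eliminates the offsetting and data-entry errors noted above.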
Distributed Computing: PennDiCon

PennDiCon is a distributed computing library that provides several communication functionalities, such as peer-to-peer, multicast, and broadcast, which can be called from C/C++ programs. In a distributed computing environment, any entity can communicate with another entity or a group of entities using PennDiCon, allowing it to dynamically discover resources in the system. The detailed architecture of PennDiCon, along with a case study, is presented in a related work (Sharma & Prabhu, 2005).
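PennDiCon itself is a C/C++ library whose API is not reproduced here, but the multicast primitive it offers can be illustrated with standard sockets. The Python sketch below is a generic group-communication recipe; the group address, port, and message format are arbitrary choices, not PennDiCon's interface.

```python
# Sketch of a multicast primitive of the kind PennDiCon provides,
# using plain UDP sockets. Group address and message are illustrative.
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007

def announce(message: bytes):
    """Send a message to every entity subscribed to the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(message, (GROUP, PORT))
    sock.close()

def listen() -> socket.socket:
    """Join the multicast group; receiving announcements like this is
    what allows entities to dynamically discover resources."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock  # caller reads with sock.recvfrom(1024)

if __name__ == "__main__":
    announce(b"machine-7: tool T-042 due for replacement")
```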
TRANSACTION IN ENTERPRISE INFORMATION SYSTEMS

A business transaction is a process in which two or more partners communicate to synchronize the related activity states in all information systems for the purpose of creating customer value. The transaction system of the outbound business comprises the CRM system. CRM enables a business to track and manage all of its customers' interactions over their lifetime, from first contact through purchase and post-sales. On the other hand, the inbound transaction system in an enterprise constitutes the ERP system. ERP systems execute a fixed sequence of defined functions such as purchasing or selling. However, existing CRM or ERP systems fail to adapt to frequent requests for changing business processes. Therefore, we developed a workflow system, Process Automation with Web Services (PAWS), which supports modification of business logic at runtime. A brief description of the training and research in transaction systems is presented in the following sections.
Outbound Transaction System

Outbound transactions cover the types of retailing, channels, tools for data collection, and transaction standards. The adoption of information technologies in the outbound transaction system includes CRM, point-of-sale systems, barcodes, RFID/EPC, global data synchronization, EDI, XML, and web services. Students get hands-on experimentation and demonstrations with, for example, Microsoft CRM, PC-based POS terminals, scanners, RFID tags and readers, and integrated software for store and home office using the Microsoft Retail Management System (RMS). The training course includes a group project in which students design and construct an IT application focused on retail. A single view of all customer interactions and campaign management for personalized services, using the Microsoft CRM software, is adopted in training. It helps students learn how to target new customers and manage marketing campaigns, enabling closer relationships with customers.
Inbound Transaction System: ERP

ERP constitutes the typical transaction system within an organization. This system provides information systems support for planning, production, marketing, and finance. Hence, it is important to educate students and employees about this software. To teach the prevalent ERP software, we currently offer training on the Production Planning module of SAP® as part of a senior undergraduate class on manufacturing systems. Typically, this training incorporates a day or two of lectures on how business processes need to work in tandem to make a business successful and how ERP software such as SAP® is used in industry to accomplish this. For a small to medium-sized company, the ERP software could be in-house software such as TEAM (Total Enterprise Application Manager). TEAM was developed in Microsoft Access® by the Georgia Technology Economic Development Institute, as shown in Figure 7. By using Visual Basic for Applications (VBA), students can learn to develop or extend the ERP software, and also get exposure to the software through hands-on exercises.

Figure 7. Master scheduling in the TEAM ERP

Process Automation with Web Services (PAWS)

PAWS is an automated workflow system for the tool quotation process within and between supply chains. It extends the RFID-based MES discussed in the previous section. Figure 8 illustrates the framework of PAWS. In PAWS, the ERP system broadcasts the tool requirements to the web services of certified suppliers registered with the Universal Description Discovery and Integration (UDDI)
server. The web services on each supplier's server verify the inventory and price in the supplier's own database and respond to the quotation based on its current status. The business logic of the Request for Quotation (RFQ) process in PAWS decides on the candidate supplier and starts another RFQ process for the logistics provider. This automated workflow system not only integrates the inner-business processes involving QPM, ERP, and the mailing system, but also covers the business-to-business (B2B) process in the supply chain. Figure 9 shows the quotation process, which broadcasts the RFQ synchronously to each supplier. Thus PAWS demonstrates to students the use of workflow systems to automate a transaction process and shrink the time-consuming RFQ process (Cheng & Prabhu, 2007a, 2007b, 2007c).

Figure 8. The framework of PAWS execution

Figure 9. Quotation process
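The RFQ step of PAWS can be sketched as a broadcast-and-compare loop. In the Python sketch below, an in-memory catalog stands in for each supplier's web service; the supplier names, quote fields, and the lowest-price selection rule are assumptions made for illustration, not the actual PAWS business logic.

```python
# Sketch of the PAWS request-for-quotation step: broadcast the requirement
# to registered supplier services and pick a candidate. All supplier data
# and the lowest-price rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Quote:
    supplier: str
    price: float
    in_stock: bool

# Stand-ins for the suppliers' web services found via the UDDI registry.
SUPPLIER_CATALOGS = {
    "supplier-a": {"T-042": (118.0, True)},
    "supplier-b": {"T-042": (105.0, True)},
    "supplier-c": {"T-042": (99.0, False)},   # cheaper but out of stock
}

def request_quote(supplier: str, tool_id: str) -> Quote | None:
    """Each 'web service' checks its own inventory and price."""
    entry = SUPPLIER_CATALOGS[supplier].get(tool_id)
    if entry is None:
        return None
    price, in_stock = entry
    return Quote(supplier, price, in_stock)

def broadcast_rfq(tool_id: str) -> Quote | None:
    """Broadcast the RFQ to every registered supplier and select one."""
    quotes = [q for s in SUPPLIER_CATALOGS
              if (q := request_quote(s, tool_id)) and q.in_stock]
    return min(quotes, key=lambda q: q.price) if quotes else None

if __name__ == "__main__":
    winner = broadcast_rfq("T-042")
    print(f"candidate supplier: {winner}")  # next: start the logistics RFQ
```

In the real system, the catalog lookups would be SOAP or REST calls to the suppliers' servers, and a second RFQ round of the same shape would select the logistics provider.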
STUDENT EXPERIENCES

Given the breadth of the proposed rollout plan, there is no single course or example that can be used to train students in the whole plan. Even so, different categories of students, including graduate students, undergraduate students, and industry practitioners, have benefited from this training. The following is a summary of training experiences from the various stages of the rollout plan:

• Microsoft Visio has been used to train undergraduate students in enterprise modeling techniques such as UML, and IDS was used as a test bed in a master's thesis (Salaka, 2004). Further, IDS has been used in an industry project to demonstrate its capabilities to analyze an enterprise's business process and identify requirements for integration (Salaka & Prabhu, 2006).

• In simulation and planning, Sensors to Suppliers (S2S) is primarily being used in two ways. One is to show industry managers the cost vs. benefit analysis of various policies in manufacturing systems. The other use of S2S is in academic settings, where it is used to educate students in evaluating various manufacturing control policies and as an example to train students in object-oriented software. S2S is also being used in an undergraduate honors thesis to study the advantages of the tie-in between a simulation environment and an ERP system.

• In the integration part of the rollout plan, undergraduate students are trained on XML features in software applications useful for integrating software applications. Further, they are familiarized with web services and the impact of service-oriented architectures on an enterprise's business process. The Schema Lens software tool is also being used as a test bed for a PhD student's research.

• In transaction, students are trained on SAP® modules to illustrate how business processes need to work in tandem to make a business successful. The students are provided logins to the SAP® systems to learn some of the software functions and become familiar with the typical data flow among the various modules of the SAP® systems that support the business functionalities encountered in an organization. Further, PAWS, the workflow software we developed, helps to illustrate to graduate students how a workflow system can work in tandem with ERP and MES systems to automate an organization's business process.

• The control software developed is being used as a test bed in a graduate course to study various control algorithms for manufacturing systems. It is also being used by a PhD student for research.
CONCLUSION

In this work, we identified the need for a rollout map for education and training in implementing enterprise information systems (EIS). Accordingly, we proposed a rollout map that encompasses the different settings in an organization where employees use EIS. Further, we developed prototype information systems that emulate the ones typical in organizations. This software is currently being used in academic settings to educate and train students in using EIS. The authors are working to incorporate the rollout plan into the courses taught in the graduate and undergraduate curriculum. The student response so far to the training has been very positive. The important
takeaway for other researchers and academicians is that, following this rollout plan, students can be familiarized with the broad spectrum of EIS in a systematic way. In the future, we plan to extend this training to small manufacturing enterprises that lack experience in using EIS.
REFERENCES

Cheng, C. Y., & Prabhu, V. (2007a). Applying RFID for cutting tool supply chain management. In Proceedings of the 2007 Industrial Engineering Research Conference (pp. 637-642).

Cheng, C. Y., & Prabhu, V. (2007b). Complexity metrics for business processes enabled by RFID and web services. In Proceedings of the 17th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM 2007) (pp. 812-819).

Cheng, C. Y., & Prabhu, V. (2007c). Performance modeling of business processes enabled by RFID and web services. In Proceedings of the 6th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2007) (pp. 718-723).

Hong, K. K., & Kim, Y. G. (2002). The critical success factors for ERP implementation: An organizational fit perspective. Information & Management, 40, 25–40. doi:10.1016/S0378-7206(01)00134-3

Hutchins, H. (1998). 7 key elements of a successful implementation, and 8 mistakes you will make anyway. In APICS 1998 International Conference Proceedings (pp. 356–358). Falls Church, VA.

Kamath, M., Dalal, N., Chaugule, A., Sivaraman, E., & Kolarik, W. (2003). A review of enterprise process modeling techniques. In V. V. Prabhu, S. Kumara, & M. Kamath (Eds.), Scalable enterprise systems: An introduction to recent advances (pp. 1–32). Boston, MA: Kluwer Academic Publishers.

Laughlin, S. (1999). An ERP game plan. The Journal of Business Strategy, 20(1), 32–37. doi:10.1108/eb039981

Mathew, S. (2003). Quantitative models for total cost of ownership of integrated enterprise systems. Pennsylvania State University, University Park.

McCaskey, D., & Okrent, M. (1999). Catching the ERP second wave. APICS - The Performance Advantage, 34–38.

Mehta, R. (2003). Software modeling tool for analysis of manufacturing and supply networks. Pennsylvania State University, University Park.

Ptak, C., & Schragenheim, E. (2000). ERP: Tools, techniques, and applications for integrating the supply chain. Boca Raton, FL: St. Lucie Press.

Salaka, V., Mehta, R., & Prabhu, V. V. (2005, June). Sensors-to-suppliers simulation modeling of manufacturing supply chains. In Proceedings of the 15th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM 2005), Bilbao, Spain.

Salaka, V., & Prabhu, V. V. (2006). Project management for enterprise integration. In Proceedings of the 10th IEEE Conference on Enterprise Computing (EDOC 2006), Hong Kong.

Sharma, A., & Prabhu, V. V. (2005). Computing and communication quality of service for distributed time-scaled simulation in heterarchical manufacturing control. International Journal of Modelling and Simulation.

Somers, T. M., & Nelson, K. (2001). The impact of critical success factors across the stages of enterprise resource planning implementations. In Proceedings of the 34th Hawaii International Conference on System Sciences (pp. 1–10), Hawaii, USA.

Umble, E. J., & Haft, R. R. (2003). Enterprise resource planning: Implementation procedures and critical success factors. European Journal of Operational Research, 146(2), 241–257. doi:10.1016/S0377-2217(02)00547-7
This work was previously published in Encyclopedia of Global Implications of Modern Enterprise Information Systems: Technologies and Applications, edited by Angappa Gunasekaran, pp. 195-208, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.7
Free and Open Source Enterprise Resources Planning

Rogerio Atem de Carvalho, Federal Center for Technological Education of Campos, Brazil
ABSTRACT

This chapter introduces the key aspects of Free/Open Source Enterprise Resources Planning systems (FOS-ERP). Starting from related work carried out by researchers and practitioners, it argues in favor of the growing acceptance of this category of enterprise systems, while showing how the subject is not yet well explored, especially by researchers. The goals of this chapter are to highlight the differences between FOS-ERP and their proprietary equivalents (P-ERP) in terms of business models, selection, customization, and evolution, and to show the challenges and opportunities that they offer to adopters, vendors, researchers, and individual collaborators. The chapter thereby tries to broaden the discussion around FOS-ERP, currently focused only on cost aspects, bringing more attention to other aspects and pointing out their innovative potential.

DOI: 10.4018/978-1-59904-859-8.ch003
INTRODUCTION

Free/Open Source1 ERP (FOS-ERP) systems are gaining growing acceptance and consequently improving their market share. According to a recent market study, FOS-ERP related services would hit about US$ 36 billion by 2008 (LeClaire, 2006). The reasons for this phenomenon are basically two: lower costs and free access to the application's source code. On the cost side, they generally impose reduced or no investment in licensing. On the access-to-code side stands the perception that, if customization is inevitable, why not adopt a solution that exposes its code to the client company, which can freely
adapt the system to its needs? This second reason is perhaps more complex and much less studied, and is addressed in many topics later in the chapter. Given this rise in FOS-ERP deployment, and the relatively small number of references on this subject, instead of simply comparing the functionalities of various solutions, this chapter aims to (a) present tendencies in open source software in general and in open source enterprise systems that directly influence FOS-ERP, (b) highlight the differences between FOS-ERP and proprietary ERP (P-ERP) in terms of business models, selection, customization, and maintenance, and (c) identify the challenges and opportunities that they offer to stakeholders and developer communities.
RELATED WORK

While increasing in market importance, FOS-ERP is still poorly analyzed by academia, where large quantities of articles put their research efforts into P-ERP deployment, project management, and economic aspects (Botta-Genoulaz, Millet & Grabot, 2005). Research on FOS-ERP software is rather deficient, and therefore a series of relevant aspects of FOS-ERP that differentiate it from P-ERP are still not well understood. As an example of this situation, research conducted on the FOS-ERP evaluation subject has shown how evaluating FOS-ERP brings more concerns than evaluating P-ERP (De Carvalho, 2006). One indication that FOS-ERP seems to be another situation where technology has outstripped the conceptual hawsers is the fact that, according to Kim and Boldyreff (2005), "by September 2005 only one paper about Open Source ERP (Smets-Solanes & De Carvalho, 2003) has been published in the whole of ACM and IEEE Computer Society journals and proceedings, whereas more numerous articles have been published in non-academic industrial trade magazines." Although nowadays more research
work has been done on FOS-ERP, this subject is still a new one, with many topics to be explored and tendencies to be confirmed, since the number of adopters and the operation times are still small in relation to the P-ERP figures. In fact, FOS-ERP is a barely explored research subject. As said before, the first academic paper on this specific subject was Smets-Solanes and De Carvalho (2003); the first paper on evaluating FOS-ERP was De Carvalho (2006); and the first international event on FOS-ERP was held in Vienna, Austria, also in 20062. These facts show how FOS-ERP is a young research area, with relatively little academic effort put into it until now. However, some good work on related topics can be found. Currently the most in-depth analysis of the economic impact of Free/Open Source Software (FOSS) in enterprise systems is the one conducted by Dreiling and colleagues (Dreiling, Klaus, Rosemann & Wyssusek, 2005). The authors argue that "standards that supposedly open development by ensuring interoperability tend to be interpreted by enterprise systems global players according to their interest". The authors follow this reasoning by showing its deeper consequences: "[global players interests] might be incongruent with the interests of the software industry at large, those of users organizations, and may also have effects on local and national economies." And more: "despite control of interfaces and standards by few software developers, even integration of the information infrastructure of one single company with one brand of enterprise system cannot be consolidated over time [citing many other authors]." On the open standards subject, they conclude that "software engineering principles and open standards are necessary but not sufficient condition for enterprise software development becoming less constrained by the politics of global players, responsive to user interests, and for ensuring a healthy software industry that can cater for regional market." On the innovation side, Dreiling and colleagues state that many economists agree that dominant companies, like the ERP global players, are less disposed to respond to articulated customer requirements, and that monopolies as well as oligopolies tend to stifle product and service innovation. Furthermore, "controlling architectures by means of proprietary software and open standards in the enterprise application industry appears to actually preclude innovation that could be of benefit for many users of enterprise systems", which includes less developed economies. This seems to be a serious problem, since adapting is a crucial point for ERP, which by nature must be adapted to the adopter's needs. This conclusion reinforces a positive consequence of the freedom to manipulate the source code in FOS-ERP: if the vendor changes its contract terms, the client company is not locked in to a particular solution supplier (Kooch, 2004). Additionally, can two competing companies derive a strategic differential using the same ERP? Although this problem can also happen with FOS-ERP, it seems to be bigger for P-ERP, since, due to the tight control over source code, adaptations are limited to parameterization or high-cost functionality changes through Application Program Interfaces (APIs) or proprietary languages, restricting real differentiation and raising customization costs (De Carvalho, 2006). Therefore, if integration among processes can by itself become a source of competitive advantage (Caulliraux, Proença & Prado, 2000), this can be extrapolated to the possibility of changing source code to drive an even better advantage. If on one hand FOS-ERP can foster innovation and give more power to adopters, on the other some important questions are yet to be answered, given that this type of FOSS is still a newcomer to the enterprise systems landscape. Even some enthusiasts recognize that FOS-ERP vendors' service levels have much to improve and experience to gain, while in contrast P-ERP vendors have a mature network of consulting partners and a long history of successes and failures (Serrano & Sarrieri, 2006). In fact, evaluating FOS-ERP, for instance, is a subject with only a few works on it. Herzog (2006) presents a very comprehensive approach that identifies three different methods for implementing a FOS-ERP solution (selecting a package, developing one in-house, or integrating best-of-breed solutions) and five criteria for evaluating alternatives: functional fit, flexibility, support, continuity, and maturity. This method introduces the interesting possibility, not yet well explored in practice, of integrating solutions from different vendors through Enterprise Application Integration (EAI) techniques. A successful case study on mixing P-ERP solutions is described by Alshawi and colleagues (Alshawi, Themistocleous & Almadani, 2004), but the literature lacks examples of doing the same with FOS-ERP. De Carvalho (2006) also presents an FOS-ERP evaluation method, named PIRCS, that holds some similarity with Herzog's3, but stresses risk evaluation more, given the strategic nature of ERP. In fact, according to Caulliraux and colleagues (Caulliraux, Proença & Prado, 2000), ERP is strategic, given that "it is a major commitment of money, and thus with long range implications even if only from a financial point of view", and ERP systems are also important not only as a tangible asset, but "as a catalyst through their implementation in the formation of intangible assets and the company's self-knowledge." Aiming to include risk considerations, the PIRCS method seeks to identify weaknesses in the FOS-ERP's development environment during its evaluation process phases. These phases give the process its name, and are summarized as Prepare the evaluation process, Identify the alternatives, Rate alternatives' attributes, Compare alternatives' results, and Select the one that best fits the adopter's needs. Also related to the strategic nature of ERP, during PIRCS' Preparation phase the adopter must define its strategic positioning in relation to the product, behaving as a simple consumer, only getting the solution from the vendor, or becoming a prosumer (Xu, 2003), by mixing passively purchasing commodity parts of the system with
dominant companies – like the ERP global players – are less disposed to respond to articulated customer requirements, and that monopolies as well as oligopolies tend to stifle product and service innovation. Furthermore, “controlling architectures by means of proprietary software and open standards in the enterprise application industry appears to actually preclude innovation that could be of benefit for many users of enterprise systems”, which includes less developed economies. This seems to be a serious problem, since adaptation is a crucial point for ERP, which by nature must be adapted to the adopter’s needs. This conclusion reinforces a positive consequence of the freedom to manipulate the source code in FOS-ERP: if the vendor changes its contract terms, the client company is not locked in to a particular solution supplier (Kooch, 2004). Additionally, can two competing companies derive a strategic differential using the same ERP? Although this problem can also happen with FOS-ERP, it seems to be bigger for P-ERP, since, due to the tight control over source code, adaptations are limited to parameterization or high-cost functionality changes through Application Program Interfaces (APIs) or proprietary languages, restricting real differentiation and raising customization costs (De Carvalho, 2006). Therefore, if integration among processes can by itself become a source of competitive advantage (Caulliraux, Proença & Prado, 2000), this can be extrapolated to the possibility of changing source code to drive an even greater advantage. If on one hand FOS-ERP can foster innovation and give more power to adopters, on the other some important questions are yet to be answered, given that this type of FOSS is still a newcomer to the enterprise systems landscape. Even some enthusiasts recognize that FOS-ERP vendors’ service levels have much to improve and experience to gain, while, in contrast, P-ERP vendors have a mature network of consulting partners and a long history of successes and failures (Serrano & Sarrieri, 2006). In fact, evaluating FOS-ERP, for
instance, is a subject with only a few works on it. Herzog (2006) presents a very comprehensive approach that identifies three different methods for implementing a FOS-ERP solution – selecting a package, developing one by oneself, and integrating best-of-breed solutions – and five criteria for evaluating alternatives: functional fit, flexibility, support, continuity, and maturity. This method introduces the interesting possibility, not yet well explored in practice, of integrating solutions from different vendors through Enterprise Application Integration (EAI) techniques. A successful case study on mixing P-ERP solutions is described by Alshawi and colleagues (Alshawi, Themistocleous & Almadani, 2004) – but the literature lacks examples of doing the same with FOS-ERP. De Carvalho (2006) also presents a FOS-ERP evaluation method, named PIRCS, which holds some similarity to Herzog’s3 but places more stress on risk evaluation, given the strategic nature of ERP. In fact, according to Caulliraux and colleagues (Caulliraux, Proença & Prado, 2000), ERP is strategic, given that “it is a major commitment of money, and thus with long range implications even if only from a financial point of view”, and ERP systems are also important not only as a tangible asset, but “as a catalyst through their implementation in the formation of intangible assets and the company’s self-knowledge.” Aiming to include risk considerations, the PIRCS method seeks to identify weaknesses in the FOS-ERP’s development environment during its evaluation process phases. These phases give the method its name, and are summarized as: Prepare the evaluation process, Identify the alternatives, Rate the alternatives’ attributes, Compare the alternatives’ results, and Select the one that best fits the adopter’s needs. Also related to the strategic nature of ERP, during PIRCS’ Preparation phase the adopter must define its strategic positioning in relation to the product: behaving as a simple consumer, only getting the solution from the vendor, or becoming a prosumer (Xu, 2003), by mixing passively purchasing commodity parts of the system with
actively developing or customizing strategic ones by itself. Of course, choosing how to behave is not a simple decision in these cases, since it involves a series of demands, like expertise on the FOS-ERP platform and architecture, dealing with the developer community – which can mean managing the demands of disparate stakeholders (West & O’Mahony, 2005) – and allocating resources for development. It is a question of weighing the direct and indirect gains of developing parts of the system against the shortcomings of doing so. Many other subjects related to open software in general that affect FOS-ERP should be addressed to better understand its dynamics. Crowston and Howison (2006) assess the health of Open Source communities as a way of helping to check whether a FOSS project is suitable for the adopter’s or contributor’s needs – this kind of assessment can be one of the tools for checking a specific FOS-ERP project’s maturity. Assessing FOS-ERP communities means understanding other organizations’ behavior towards the project: since ERP systems in general are not for individual use, contributors are most often companies’ employees, not free-lancers. Hence, to understand the differences between this and other types of open software, it is necessary to understand how commercially sponsored and community-built FOSS projects behave. According to West and O’Mahony (2005), one of the key moments of commercial FOSS is the start-up phase: when the project code is opened, “an infant community is presented with a large complex system that may be harder to decipher”, thus the FOS-ERP creator may have to wait until contributions from other firms become viable and advantageous – the main economic incentive for firm participation is emancipation from the price and license conditions imposed by large software companies (Wang and Chen, 2005), but the potential for such substitution may be only latent in the project. The same type of incentive is identified by Riehle (2007), who states that solution providers can take advantage of open source software “because they increase profits through direct costs savings
and the ability to reach more customers through improved pricing flexibility”. Another aspect on the vendor side identified by these authors is that opening the code can reduce the costs of software testing and research & development tasks. These advantages are stimulating better market acceptance, according to a Zinnov (2006) study, which, among other things, shows figures on rising venture capital participation and the higher penetration of open source solutions in the US enterprise systems market. Furthermore, Goth (2005) affirms that the open software market is in a “second wave” towards enterprise software, and that FOSS business models are finally ready to face this new market challenge. These last two references point to a general improvement in the relation between FOSS communities and enterprise systems users. Despite the differences, FOS-ERP and P-ERP certainly have one thing in common: both have a company behind their deployment activities. Although there are FOS-ERP projects maintained almost solely by communities formed basically of individuals, like GNU Enterprise, it seems that only company-sponsored FOS-ERP, such as Compiere, ERP5, OpenMFG, and SQL Ledger, are really successful. In other words, FOS-ERP is typically of the commercial open source kind, which “a for-profit entity owns and develops”, according to Riehle’s (2007) classification. The next topics show how FOS-ERP differs from P-ERP, present the opportunities and challenges that this kind of software offers to developers and adopters, and finally draw some conclusions on the subject.
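To make the Rate and Compare phases of an evaluation method such as PIRCS (or Herzog’s criteria-based comparison) concrete, the sketch below combines per-criterion scores into a single comparable figure. It is only an illustration of the weighted-scoring idea: the criterion names follow Herzog (2006), but the weights, scores, and package names are invented, and the published methods involve considerably more, notably PIRCS’ risk analysis.

```python
# Illustrative weighted scoring for comparing ERP alternatives.
# Criteria follow Herzog (2006); weights and scores are invented examples
# that a real evaluation would produce in its Prepare and Rate phases.
CRITERIA_WEIGHTS = {
    "functional fit": 0.35, "flexibility": 0.20,
    "support": 0.15, "continuity": 0.15, "maturity": 0.15,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into one comparable figure."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

alternatives = {  # hypothetical candidate packages
    "Package A": {"functional fit": 8, "flexibility": 6, "support": 7,
                  "continuity": 6, "maturity": 7},
    "Package B": {"functional fit": 6, "flexibility": 9, "support": 5,
                  "continuity": 7, "maturity": 5},
}
best = max(alternatives, key=lambda name: weighted_score(alternatives[name]))
print(best, weighted_score(alternatives[best]))  # e.g. Package A, ~7.0
```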
Differences Between FOS-ERP and P-ERP

The fact that FOS-ERP exposes its code forces vendor and adopter processes to accommodate the consequence that customization and maintenance can be done by parties other than the vendor. This means that the adopter is free to choose the
participation level of the vendor in the different phases of the ERP life cycle – meaning that, to some extent, vendor participation can also be customized4. Analyzing the differences between open source and proprietary ERP depends on which side of the commercial relationship the organization is on: adopter or vendor.
Differences for the Adopter

Selecting an ERP for adoption is a complex process because, besides the size of the task, an ERP is an important enterprise component that impacts the adopter organization in financial and self-knowledge terms. It is therefore important to use a framework to understand how open source alternatives can impact this kind of project. The Generalized Enterprise Reference Architecture and Methodology (GERAM) is a well-known standard that provides a description of all elements recommended in enterprise engineering and a collection of tools and methods to perform enterprise design and change with success (IFIP–IFAC, 1999), providing a template life cycle to analyze FOS-ERP selection, deployment, and evolution. GERAM
defines seven life-cycle phases for any enterprise entity that are pertinent during its life. These phases, presented in Figure 1, can be summarized as follows:

a. Identification: identifies the particular enterprise entity in terms of its domain and environment.
b. Concept: conceptualizes an entity’s mission, vision, values, strategies, and objectives.
c. Requirements: comprises a set of human, process, and technology oriented aspects and activities needed to describe the operational requirements of the enterprise.
d. Design: models the enterprise entity and helps to understand the system functionalities.
e. Implementation: the design is transformed into real components. After being tested and approved, the system is released into operation.
f. Operation: the actual use of the system, including user feedback that can drive a new entity life cycle.
g. Decommission: represents the disposal of parts or the whole of the entity, after its successful use.
Figure 1. GERAM life cycle phases. The design phase is subdivided into preliminary and detailed design.
Except for decommission and identification, which are not influenced by licensing models, these phases can be used to better understand how FOS-ERP differs from P-ERP, providing key aspects for evaluating alternatives and successively refining objectives, requirements, and models, as the next subtopics address.
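Purely as an illustration, the phases just listed can be encoded as data, marking the ones the text treats as sensitive to the licensing model; the structure and flag names below are invented, not part of the GERAM standard.

```python
# Hypothetical encoding of the GERAM phases, flagging the ones identified
# above as influenced by the licensing model (all except identification
# and decommission).
GERAM_PHASES = [
    ("identification", False), ("concept", True), ("requirements", True),
    ("design", True), ("implementation", True), ("operation", True),
    ("decommission", False),
]
license_sensitive = [name for name, flag in GERAM_PHASES if flag]
print(license_sensitive)
# ['concept', 'requirements', 'design', 'implementation', 'operation']
```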
Concept

During this phase, high-level objectives are established, such as the acquisition strategy, preliminary time and cost baselines, and the expected impact of ERP adoption. In the case of FOS-ERP, the level of adopter involvement in development can also be established. In other words, at this point the adopter can start considering the possibility of actively contributing to an open source project, becoming a prosumer. Of course, this decision will only become final during the more advanced phases, when the adopter better knows the solution requirements and the decision alternatives.
Requirements and Preliminary Design

Taking as a principle that most software development (and customization) today is done through iterative and incremental life cycles, it can be considered that there is no clear borderline between the requirements and preliminary design phases, nor between the detailed design and implementation phases; thus they are considered together in this analysis. The requirements phase deals with the system’s functional and non-functional requirements. The adopter may model some main business processes – part of the preliminary design – as a way to check how the alternatives fit them. At this point FOS-ERP starts to differ more from P-ERP. Evaluating P-ERP involves comparing alternatives in the light of functionality, Total Cost of Ownership (TCO), and technological
criteria. For FOS-ERP, these criteria and others related specifically to FOSS must also be taken into account – remembering that even if the implementation represents a smaller financial impact, in terms of a company’s self-knowledge it can assume a much bigger importance, since it holds not only an inventory of records and procedures, but also how those records and procedures are realized in technological form – through source code. In other words, a FOS-ERP can have a smaller financial impact but a much bigger knowledge and innovation impact. Although P-ERP systems are also highly parameterized, and adaptable through APIs and/or dedicated programming languages, the access to the source code in FOS-ERP can drive much better exploration of the ERP’s capabilities, thus allowing a better implementation of differentiated solutions. From this standpoint, the strategic positioning of an adopter in relation to a FOS-ERP seems to be of the greatest importance, given the possibility of deriving competitive advantage from the source code. Therefore, the adopter must decide whether to behave as a simple consumer, only getting the solution from the vendor, or to become a prosumer, by mixing passively purchasing commodity parts of the system with actively developing strategic ones by itself. Thus it is clear that when an adopter considers FOS-ERP as an alternative, it should also consider developing parts of it to fit its requirements – taking into account that, as said before, this kind of positioning involves allocating managerial and technical resources for development tasks in a FOSS environment.
Detailed Design and Implementation

The detailed design phase focuses on refining models, and is associated with business process modeling and with parameter identification and value definition. The implementation phase concentrates on validating and integrating modules and releasing them for initial use.
If the adopter decides to participate actively in the selected FOS-ERP project, deeper design decisions are involved, such as creating entirely new modules or extending the basic framework. A consequence of assuming a more active role is the need to invest more human and financial resources in learning the FOS-ERP platform and framework, developing and maintaining parts of it, and managing the relationship with the project community. In that case, customization and maintenance contracts must define the responsibilities of each party in the deployment process. For instance, what should the vendor do if the adopter finds a bug in the original code – written by the former but being adapted by the latter? What priority must the vendor follow in correcting this bug? Indeed, is the vendor responsible for correcting it at all, since for this part the adopter decided to take advantage of the solution’s free license, therefore exempting the vendor from responsibility? The adopter has the option of assuming different degrees of involvement in each phase. For ordinary modules, like payroll, the adopter can let the vendor do the work. However, for strategic modules, where the adopter believes it holds competitive advantage in the related business processes, it can take an active role from detailed design through implementation and maintenance, to be sure that the business knowledge – or at least the more precious details that sustain the competitive advantage – is kept inside the adopter company. In that situation the vendor is limited to acting as a kind of advisor to the adopter. One may think it is possible to keep parts of the system secret by properly contracting a P-ERP vendor, which is true, but the adopter then becomes dependent on the vendor for a strategic part of the system. Becoming dependent means waiting on the vendor’s other priorities, or paying a high price to become the priority when changes are needed. Even if the P-ERP adopter decides to develop these highly strategic parts itself, it will have to deal with licensing costs anyway.
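The per-module decision described above can be pictured as a simple responsibility matrix; the modules, roles, and assignments below are hypothetical, meant only to show the shape of the decision.

```python
# Hypothetical responsibility matrix: for each module, who performs detailed
# design, coding, and maintenance. Strategic modules stay with the adopter so
# the business knowledge embedded in them never leaves the company.
involvement = {
    "payroll":        {"design": "vendor",  "code": "vendor",  "maintain": "vendor"},
    "inventory":      {"design": "vendor",  "code": "adopter", "maintain": "adopter"},
    "pricing engine": {"design": "adopter", "code": "adopter", "maintain": "adopter"},
}
kept_in_house = [m for m, roles in involvement.items()
                 if roles["maintain"] == "adopter"]
print(kept_in_house)  # modules whose know-how stays with the adopter
```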
A very interesting point is the openness of parts customized for, and sponsored by, a specific adopter. Maybe the adopter does not want to become a developer at all – which is the most likely case – but still wants to keep some tailored parts of the system secret. In these cases, the vendor must adapt the licensing terms of its solution, so that the general openness of the code is guaranteed, while some client-sponsored customized parts can be kept closed5.
Operation

During the operation phase the resources of the entity are managed and controlled so as to carry out the processes necessary for the entity to fulfill its mission. Deviations from goals and objectives, or feedback from the environment, may lead to requests for change; therefore, system maintenance and evolution occur during this phase. During operation the adopter can decide at any moment, unless contractual clauses hinder it, to shift to another vendor or to assume the system’s maintenance itself. Minor changes can also be conducted by the adopter itself, or even by community members who may help on specific matters. As a conclusive remark on the differences of FOS-ERP on the adopter side, experience has shown that most of the time the adopter will not get involved in customization or even maintenance tasks. Still, FOS-ERP can be a good choice, since it reduces vendor dependency. Moreover, the openness of code in FOS-ERP also makes adapting it to specific needs easier, thus reducing the costs of customization and further evolution of the software. In other words, the central points to consider are cost reduction and freedom of choice. Last but not least, as a general rule, FOS-ERP also relies on other open technologies. For instance, while most P-ERP systems export and import data to and from MS Office, FOS-ERP systems, like ERP5, interact with the equally free OpenOffice. The same is true for databases and operating systems – thus reducing licensing costs for ERP-supporting software too.
Differences for the Vendor

FOS-ERP vendor business models are a consequence of the customer’s freedom of choice and of the general characteristics of the open source market. As in other types of FOSS, if on one hand vendors benefit from the community’s improvements and testing work, on the other they face competition from this same community when dealing with deployment and maintenance. In fact, as previously shown, even an adopter can become a competitor – to some extent, of course. It is important to note that there are three types of vendor: the original system creator, its partners, and free-lance vendors. In the case of partners, a formal, usually contractual, agreement is set between them and the system creator. This agreement involves some responsibilities for the partner, in particular following the creator’s deployment practices, communicating new business generated by the system, opening the source code of new and improved parts of the system, and helping in development tasks managed by the creator. Free-lance vendors are free of these obligations and, as a consequence, receive no special treatment from the creator, who expects that the free-lancer will at least
open the code of its own system improvements, following the general FOSS code of ethics6. Following the common reasoning about FOSS pricing, FOS-ERP vendors can take advantage of open source software because, according to Riehle (2007), FOSS “increase profits through direct costs savings and the ability to reach more customers through improved pricing flexibility”, as shown in Figure 2. Figure 2 shows a situation that is perhaps more applicable to partners and free-lance vendors of FOS-ERP, who can switch from more expensive proprietary software to less expensive open source software, thus potentially increasing their profit margins. The creator organization must know how to manage the community around the project: finding prospective partners, hiring individuals who become highly productive on the project whenever possible, and even trying to transform free-lancers into partners. Like their proprietary counterparts, FOS-ERP vendors need a network of partners that can help with deployment projects where the creator is not in a position to be the main contractor, and with finding new markets and customers for the system. However, gathering contributors for a starting FOS-ERP project can be a hard task.
Figure 2. Sales margins and number of customers; (a) The lower price limit determines the customers the system integrator takes on; (b) Switching from closed source software to open source software can result in more customers and higher profits (Source: Riehle, 2007, with permission)
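The argument in Figure 2 can be illustrated with a stylized calculation (all figures below are invented): removing a per-deployment license fee lowers the integrator’s cost floor, which can simultaneously raise margins and make previously unprofitable customers serviceable.

```python
# Stylized illustration of Figure 2's pricing argument, with invented numbers.
# Each prospect has a maximum price it is willing to pay for a deployment; the
# integrator picks the profit-maximizing price but can never go below cost.
willingness_to_pay = [30_000, 25_000, 22_000, 18_000, 15_000, 12_000]

def best_outcome(cost_floor: int) -> tuple[int, int, int]:
    """Return (price, customers served, profit) at the best feasible price."""
    outcomes = []
    for price in sorted(set(willingness_to_pay)):
        if price >= cost_floor:
            customers = sum(1 for w in willingness_to_pay if w >= price)
            outcomes.append((customers * (price - cost_floor), customers, price))
    profit, customers, price = max(outcomes)
    return price, customers, profit

service_cost, license_fee = 10_000, 8_000   # invented per-project costs
print(best_outcome(service_cost + license_fee))  # proprietary: (25000, 2, 14000)
print(best_outcome(service_cost))                # open source: (22000, 3, 36000)
```

With the lower cost floor, the integrator serves more customers and earns a higher total profit, which is exactly the shift Figure 2 depicts.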
As said before, ERP users are organizations, not individuals, and therefore the creator must learn how to attract partner firms that are willing to contribute to the project without becoming competitors. As a main conclusion, FOS-ERP vendors must fight hard to form a community around the project and to retain customers. This seems to be a big difference between the open and proprietary licensing models, since the risk of the vendor losing a client after deployment is almost nonexistent in the current P-ERP dominated market landscape, where global players dictate market rules in practice. The differences between FOS-ERP and P-ERP can lead to a shift from the vendor-dominated perspective of P-ERP to a more customer-driven FOS-ERP perspective. These differences in conducting selection, adoption, and selling also bring a series of opportunities and challenges for both vendors and adopters, which are addressed in the following topics.
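One crude way to operationalize the community-health assessments cited earlier (Crowston & Howison, 2006) for this vendor-side concern is to measure how concentrated contributions are in the sponsoring firm; the indicator and the commit counts below are invented for illustration.

```python
# Hypothetical community-health indicator: the share of all contributions
# coming from the single largest organization. A project dominated by one
# sponsor has not yet formed the partner community discussed above.
def sponsor_concentration(commits_by_org: dict) -> float:
    total = sum(commits_by_org.values())
    return max(commits_by_org.values()) / total

project = {"creator firm": 820, "partner A": 90, "partner B": 60, "individuals": 30}
print(round(sponsor_concentration(project), 2))  # 0.82 -> still heavily creator-driven
```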
Opportunities and Challenges

FOS-ERP offers a series of opportunities for actors that are currently outside of, or poorly inserted into, the ERP market. These opportunities come together with a series of challenges, as listed below.

For smaller consulting firms:

a. Opportunities: P-ERP vendors generally impose high costs and a rigid set of rules on firms that wish to enter their partner networks, raising the difficulties for smaller firms to become players in this market. In contrast, smaller consulting firms can enter the FOS-ERP market incrementally, increasing their commitment to a project as new business opportunities appear and bring more financial income. In other words, firms can start by contributing small improvements to the project as a way of gaining knowledge of the system platform and framework and, as customers for the solution appear, invest more money in a growing commitment to the project. Additionally, with the rise of venture capital investment in FOSS startups, a smaller firm can even obtain financing in a way that would be very unlikely if it worked on top of a P-ERP solution, given the restrictions imposed by the global players.
b. Challenges: if on one hand it is easier to enter the market, on the other it is harder to retain clients: a broader consultancy base empowers the demand side, making customers more demanding.
Keeping the quality level consistent among a heterogeneous network of consulting service providers is also a major challenge. FOS-ERP projects in general lack certification and quality assurance programs that guarantee service levels to clients. Moreover, FOS-ERP skeptics argue that few reliable consulting firms have experience in implementing them. But it is exactly those programs that keep smaller consulting firms away from P-ERP, pushing them towards FOS-ERP. For a small consulting firm, a possible solution to this deadlock is to start with smaller, less demanding projects, and then move towards bigger ones as the deployment processes and related activities gain maturity. This maturity will become the firm’s competitive advantage in a highly competitive FOS-ERP market.

For smaller adopters:

a. Opportunities: lower costs open new opportunities for Small and Medium Enterprises (SMEs) to become ERP adopters. With globalization, small firms suffer more and more from competition, and when they try to modernize their processes they hit the wall of the global players’ high costs, or have to adopt smaller off-the-shelf (and also proprietary) solutions that tie them to a single supplier that normally does not have a partner network. In contrast, FOS-ERP is less expensive, and support can be found in different ways, including from individuals and other SMEs. The same is true for local governments and developing countries in general. FOS-ERP reduces costs, thus helping local governments focus on their core business – directly taking care of citizens – and reducing technological dependency on global players. In fact, FOSS in general is an opportunity for developing countries to shift from buyers to players in the software industry (Ouédraogo, 2005).
b. Challenges: lower costs can also mean that adopters have to deal with lower service levels, stressing the necessity of carefully evaluating FOS-ERP options and the maturity of their supporting services. Actually, as said before, consulting certification is still at an early stage for FOS-ERP, so quality of service must be carefully addressed during contract negotiation.

For researchers:

a. Opportunities: the author has been contributing to a FOS-ERP project7 since its conception. During this time it was possible to know deeply, and sometimes take part in, all the processes that compose an ERP solution, from conception and development to business models, deployment, operation and maintenance, and evolution. This is a really good opportunity, since most research papers on ERP are related to deployment and operation, given that P-ERP companies do not usually open their projects’ internals to researchers. Smaller research groups can find their way in this area by becoming associated with a FOS-ERP project and contributing to specific parts of it.
b. Challenges8: if on one hand the openness of FOS-ERP may give researchers more information on their internal features and development processes, on the other it is harder to get information from a distributed set of partners that sometimes operate under informal agreements. Social and economic aspects, like reward structures, must be taken into account to understand the dynamics of FOS-ERP, as in every FOSS project, bringing more components to be analyzed.

For individuals:

a. Opportunities: FOS-ERP represents a unique opportunity for an individual to install an ERP framework and understand its internals. It is the chance to participate in a big software development project without being an employee of a big company (Spinellis, 2006). Also, the developer can incrementally gain knowledge of the system and get free support from the community, without the investment required by high-cost P-ERP training and certification programs. In that way individuals can improve their employability without investing too much money in courses, books, and certifications. In the future, these advantages may bring more free-lance developers into FOS-ERP communities, which are currently formed mostly by companies’ employees.
b. Challenges: learning the internals of a FOSS in general means spending considerable time understanding system architecture, design decisions, and specific features. Moreover, FOS-ERP in particular currently lacks books and courseware to help accelerate the learning process, and many times the individual must rely on Web sites, mailing lists, discussion forums, and the good will of community members to acquire deeper knowledge of the framework.
Conclusion
In this chapter, the particularities, opportunities, and challenges of Free/Open Source ERP were briefly presented. It is important to note that this type of software inherits all the advantages and shortcomings of open source software in general, and has some more of both. As a matter of fact, FOS-ERP’s two main advantages are both directly related to FOSS: lower TCO, given the reduced or nonexistent licensing costs – including those of supporting software, like spreadsheets, databases, networking and operating systems; and the possibility of direct access to the source code whenever needed. Nevertheless, despite the growing interest in this subject, it still has many topics to be explored by researchers and practitioners, given the short time that has passed since this kind of software appeared on the market and the relatively small number of users. This indicates that the aforesaid list of opportunities and challenges reflects current tendencies that must be confirmed and better scrutinized as new deployments occur. For instance, there are currently no research figures on FOS-ERP success rates. In other words, more data on deployment, customization, operation, and evolution must become available so that tendencies may be confirmed and become facts and figures. Hence, as a relatively new kind of software, FOS-ERP has a potential yet to be realized, but many questions about it are still to be answered. Nevertheless, its growing commercial acceptance is a fact, and its lower costs, easier adaptation, and potentially more competitive supplier market can slowly force a shift in the ERP market from the current vendor perspective to a customer perspective.

References
Alshawi, S., Themistocleous, M., & Almadani, R. (2004). Integrating diverse ERP systems: A case study. The Journal of Enterprise Information Management, 17(6), 454–462. Emerald Group Publishing Limited.

Botta-Genoulaz, V., Millet, P.-A., & Grabot, B. (2005). A survey on the recent research literature on ERP systems. Computers in Industry, 56, 510–522. doi:10.1016/j.compind.2005.02.004

Caulliraux, H. M., Proença, A., & Prado, C. A. S. (2000). ERP Systems from a Strategic Perspective. Sixth International Conference on Industrial Engineering and Operations Management, Niteroi, Brazil.

Crowston, K., & Howison, J. (2006). Assessing the Health of Open Source Communities. IEEE Computer, May, 89–91.

De Carvalho, R. A. (2006). Issues on Evaluating Free/Open Source ERP Systems. In Research and Practical Issues of Enterprise Information Systems (pp. 667–676). Springer-Verlag.

Dreiling, A., Klaus, H., Rosemann, M., & Wyssusek, B. (2005). Open Source Enterprise Systems: Towards a Viable Alternative. 38th Annual Hawaii International Conference on System Sciences, Hawaii.

Goth, G. (2005). Open Source Business Models: Ready for Prime Time. IEEE Software, (November/December), 98–100. doi:10.1109/MS.2005.157

Herzog, T. (2006). A Comparison of Open Source ERP Systems. Master thesis, Vienna University of Economics and Business Administration, Vienna, Austria.

IFIP–IFAC Task Force on Architectures for Enterprise Integration. (1999). GERAM: Generalized Enterprise Reference Architecture and Methodology, 31.
Kim, H., & Boldyreff, C. (2005). Open Source ERP for SME. Third International Conference on Manufacturing Research, Cranfield, UK.

Kooch, C. (2004, February 1). Open-Source ERP Gains Users. http://www.cio.com/archive/020104/tl_open.html

LeClaire, J. (2006, December 30). Open Source, BI and ERP: The Perfect Match? http://www.linuxinsider.com/story/LjdZlB0x0j04cM/OpenSource-BI-and-ERP-The-Perfect-Match.xhtml

Ouédraogo, L.-D. (2005). Policies of United Nations System Organizations Towards the Use of Open Source Software (OSS) in the Secretariats. Geneva, 43p.

Riehle, D. (2007). The Economic Motivation of Open Source Software: Stakeholder Perspectives. IEEE Computer, 40(4), 25–32.

Serrano, N., & Sarrieri, J. M. (2006). Open Source ERPs: A New Alternative for an Old Need. IEEE Software, (May/June), 94–97. doi:10.1109/MS.2006.78

Smets-Solanes, J., & De Carvalho, R. A. (2003). ERP5: A Next-Generation, Open-Source ERP Architecture. IEEE IT Professional, 5(4), 38–44. doi:10.1109/MITP.2003.1216231

Spinellis, D. (2006). Open Source and Professional Advancement. IEEE Software, (September/October), 70–71. doi:10.1109/MS.2006.136

Wang, F.-R., He, D., & Chen, J. (2005). Motivations of Individuals and Firms Participating in Open Source Communities. Fourth International Conference on Machine Learning and Cybernetics, 309–314.

West, J., & O’Mahony, S. (2005). Contrasting Community Building in Sponsored and Community Founded Open Source Projects. 38th Annual Hawaii International Conference on System Sciences, Hawaii.
Xu, N. (2003). An Exploratory Study of Open Source Software Based on Public Archives. Master thesis, John Molson School of Business, Concordia University, Montreal, Canada.

Zinnov Research and Consulting. (2006). Penetration of Open Source in US Enterprise Software Market – An Overview. 37p.
Key Terms

ERP: Enterprise Resources Planning, a kind of software whose main goal is to integrate all data and processes of an organization into a unified system.

ERP Business Models: The broad range of informal and formal models that a vendor uses to make a profit from ERP system deployment, customization, and maintenance.

ERP Evaluation: The process of selecting an ERP package, among various alternatives, in accordance with business processes, information, technology, and strategic requirements.

Free Software: According to the Free Software Foundation, software that gives the user the freedom to run the program for any purpose, study how the program works and adapt it to his or her needs, redistribute copies, improve the program, and release those improvements to the public, so that the whole community benefits.

Free/Open Source ERP: ERP systems that are released as Free Software or Open Source Software.

Free/Open Source Software Adopter Types: According to Xu (2003), it is possible to classify a software user company according to its positioning in relation to a FOSS project. Consumer: a passive role where the adopter will just use the software as it is, with no intention or capability of modifying or distributing the code. Prosumer: an active role where the adopter will report bugs, submit feature requests, and post messages to lists; a more capable Prosumer will also provide bug
fixes, patches, and new features. Profitor: a passive role where the adopter will not participate in the development process but will simply use the software as a source of profits. Partner: an active role where the adopter will actively participate in the whole open source development process for the purpose of earning profits.

Open Source Software: According to the Open Source Initiative, licenses must meet ten conditions in order to be considered open source licenses:

1. The software can be freely given away or sold.
2. The source code must either be included or freely obtainable.
3. Redistribution of modifications must be allowed.
4. Licenses may require that modifications be redistributed only as patches.
5. No discrimination against persons or groups.
6. No discrimination against fields of endeavor.
7. The rights attached to the program must apply to all to whom the program is redistributed, without the need for execution of an additional license by those parties.
8. The program cannot be licensed only as part of a larger distribution.
9. The license cannot insist that any other software it is distributed with must also be open source.
10. The license must be technology-neutral.

The official definition of Open Source Software is very close to the definition of Free Software; however, it allows in practice more restrictive licenses, creating a category of “semi-free” software.
Endnotes

1. The precise definitions of Free Software and Open Source Software are in the chapter’s Key Terms list. Although there are differences between them – the Free Software Movement also has some political connotations, for instance – for the goals of this work the two terms are treated as synonyms.
2. The IFIP First International Workshop on Free/Open Source Enterprise Information Systems/ERP, held during the First IFIP TC8 International Conference on Research and Practical Issues of Enterprise Information Systems – CONFENIS 2006.
3. Despite the fact that these methods hold some similarities, they were developed without knowledge of each other and were published very close in time: Carvalho’s method was published in April and Herzog’s in June, 2006.
4. This is a generic assumption, since in practice the vendor can impose specific license terms that keep the software open but constrain deployment in ways that retain vendor control. Although this seems to be nonsense in FOSS terms, it is a common real-life situation in FOS-ERP.
5. In fact, the author knows a case where an adopter company sponsored the whole development of a FOS-ERP over a three-year period without becoming a prosumer, keeping secret only a specific algorithm related to its product pricing schedule. The original license had to be changed to fit this customer demand.
6. In practice, this return from free-lancers does not always happen.
7. ERP5: http://www.erp5.com
8. The author considers that these challenges represent, in fact, new research opportunities.
This work was previously published in Handbook of Research on Enterprise Systems, edited by Jatinder N. D. Gupta, Sushil Sharma and Mohammad A. Rashid, pp. 32-44, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.8
E-Government and ERP: Challenges and Strategies
Gita A. Kumta SVKM’s NMIMS University, School of Business Management, Mumbai, India
DOI: 10.4018/978-1-59904-859-8.ch025

Abstract

The chapter introduces the essence of ERP in government as a tool for the integration of government functions, which provides the basis for citizen services. It discusses the challenges faced in the modernization of government “businesses” and strategies for implementation. The basis of Enterprise Resource Planning (ERP) solutions is the integration of functions that capture basic data through transactions to support critical administrative functions such as budgeting and financial management, revenue management, supply chain management, and human resources management. Today, enterprise solutions (ES) go beyond ERP to automate citizen-facing processes. The integration of data sources with each contact point is essential to ensure a consistent level of service. The author expects that researchers, governments, and solution providers will be able to appreciate the underlying constraints and issues in the implementation of ERP, and hopes that the learning from industry will be useful in planning the implementation of ES in government using emerging technologies.
Introduction

ERP provides an enterprise-wide view of an organization and integrates various silos of activity. Such an integrated approach has a tremendous payback if implemented properly. Most ERP systems were designed to be used by manufacturing companies to track men, machines, and materials so as to improve productivity and reduce inventory. Viewed from a business perspective, ERP systems are now known as Enterprise Solutions (ES), which take a customer order and provide a software road map
for automating the different steps along the path to fulfilling the order. The major reasons why companies look at ES can be summarized as:

• Integrate financial information
• Integrate customer order information
• Standardize and speed up operational processes
• Reduce inventory
• Standardize HR information
Governments worldwide have been making efforts to use information and communications technologies (ICT) as an instrument of change to provide better services to citizens, facilitate work flow, and provide better governance and transparency. In this effort, popularly known as E-Government, the focus was initially on information dissemination and has now moved on to transactions. What is required is a transformation of the public administration that takes a citizen service request and provides a software road map for automating the different steps along the path to fulfilling the request. This cuts across various departments, and it is therefore critical to lay down suitable policies, guidelines, and specifications, and also to redefine processes to facilitate faster proliferation of ICT applications. E-government does not happen with more computers or a website. While online service delivery can be more efficient and less costly than other channels, cost savings and service improvements are not automatic. E-government therefore has to focus on planning, sustained allocation of budgets, dedication of manpower resources and, above all, the political will. The e-government field, like most young fields, lacks a strong body of well-developed theory. One strategy for coping with theoretical immaturity is to import and adapt theories from other, more mature fields (Flak & Rose, 2005). A literature survey on implementations of e-governance has brought out the following observations, which would help us view the use of Information & Communication Technology (ICT) in the right perspective:

• Most governments have not changed their processes in any way, and instead have automated flawed processes.
• Government budgets and administration tend to be in departmental silos, but e-governance cuts across departments.
• Too much attention to “citizen portals” has taken attention away from internal government functioning. There is a big gap between a web site and integrated service delivery.
• Governments often underestimate the security, infrastructure, and scalability requirements of their applications, which impacts the quality of service (Khalil, Lanvin & Chaudhry, 2002).
Learning from the experiences of the corporate world, governments today understand the need for a consistent and flexible information infrastructure that can support organizational change, cost-effective service delivery, and regulatory compliance. ERP is therefore needed to meet organizational objectives and outcomes by better allocating resources – people, finances, capital, materials, and facilities. Modernization programs, however, involve a broad range of activities and require a wide array of skills and experiences, as these programs affect everything from computers to culture. The objective is to reduce administrative overhead and improve core product/service delivery.
Essence of ERP in Government

Before moving on to the ERP discussion, it is necessary to dwell a little on various aspects of E-government, which is about transforming the way government interacts with the governed. The E-Government Handbook for Developing Countries identifies three major phases – Publish,
Interact and Transact. These, however, are not sequential phases and hence can be considered major aspects of e-government. In short, e-government utilizes technology to accomplish reform by fostering transparency, eliminating distance and other divides, and empowering people to participate in the political processes that affect their lives (Khalil, Lanvin, Chaudhry, 2002). Each of the phases is briefly summarized below.

• Publish: Governments generate, and also publish in print, large amounts of information which can be disseminated to the public using ICT. Some of these cases are:
◦ The E-Government Portal of Canada, considered one of the best government portals in the world. http://www.canada.gc.ca
◦ The JUDIS (Judgment Information System) in India posts court records, case information and judicial decisions. http://indiancourts.nic.in/itinjud.htm
• Interact: Interactive e-government involves two-way communications, starting with basic functions like email contact information for government officials or feedback forms that allow users to submit comments on legislative or policy proposals. Some of the cases are:
◦ Citizen Space, a section of the British Government’s web portal allowing citizens to comment on government policy. http://www.ukonline.gov.uk
◦ The Central Vigilance Commission in India allows citizens to file online complaints about corruption. http://www.cvc.nic.in/vscvc/htm
• Transact: Just as the private sector in developing countries is beginning to make use of the Internet to offer e-commerce services, governments will be expected to do the same with their services. A transact website will offer a direct link to
government services, available at any time. Some of the cases are:
◦ The Bhoomi Project, delivery of land titles online in Karnataka, India. http://www.revdept-01.kar.nic.in/Bhoomi/Home.htm
◦ The Government E-Procurement System in Chile. http://www.compraschile.cl/Publico/entrada_publico.asp

To achieve this, it is essential to look at the internal processes of the government, the relationships between various departments, the sharing of information between departments, and the IT infrastructure required to support these aspects. The following section indicates the characteristics of content, process, people, and technology required by government to implement the various initiatives, thereby identifying the essence of ERP in government (refer to Table 1). The E-Government Handbook for Developing Countries lists many more case studies which highlight these phases (Khalil, Lanvin, Chaudhry, 2002). Enterprise modernization is therefore a complex, ongoing evolutionary process that involves the integrated transformation of strategies, policies, organization and governance structures, business processes and systems, and underlying technologies. Only by aligning these elements with its business goals can an agency achieve a successful modernization program (Kirwan, Sawyer, and Sparrow, 2003). The very essence of enterprise modernization is integration of processes and connectivity of stakeholders. It therefore involves connecting:

• Government to government (G2G): departmental integration of processes – ERP
• Government to businesses (G2B): information to suppliers and procurement – SCM
• Government to citizens (G2C): information and services to citizens – CRM
• Government to employees (G2E): information and services to employees – ERP
Table 1. Requirements for e-government initiatives

Publish (focus: centralization of content)
• Content: existing documents in terms of rules and regulations, documents, and forms.
• Process: simple process of capture and monitoring.
• People: small project team with skill focus on IT; minimal involvement of department staff; minimal technical support for usage.
• Technology: basic IT infrastructure, storage, and web services (portal); batch mode; can be totally outsourced.

Interact (focus: involvement of citizens)
• Content: grievances, suggestions, feedback.
• Process: coordination with departments and communication on the status of content.
• People: small internal teams with skill focus on communication and people management; moderate technical support for usage.
• Technology: e-mail facility and collaborative systems; on-line and batch mode; can be partially outsourced.

Transact (focus: direct link of citizens to government services)
• Content: integration of functional processes; service flow from application to service delivery; covers forms, logic of computation, controls, and operational and legal policies.
• Process: data capture through transactions, validation, and processing using defined logic; focus on managing business processes.
• People: large project teams; skills in communication, people management, and functional knowledge; extensive technical support for usage.
• Technology: large and robust IT infrastructure – storage, internal network, ERP solution, web services and e-commerce; online, real-time mode; cannot be outsourced; requires technology partnerships and ICT policies.
As citizens grow in awareness, governments today are under increasing pressure to deliver a range of services – from ration cards, motor licenses, and land records to health, education, and municipal services – in a manner that is timely, efficient, economical, equitable, transparent, and corruption-free. For any government that is keen to respond to this demand and to hasten the pace of development, information technology comes as an excellent tool. However, technocratic responses in themselves are not a solution; they are a tool. A solution is one that is holistic and describes how the tool can be feasibly deployed (Carter, 2005). ERP solutions in government should therefore provide horizontal components, which are relevant to every part of a government, and should support the vertical integration needed for the delivery of specific services. Such solutions would enable governments to offer cross-functional services to citizens, which no longer need to be restricted to the structure of the government. Tremendous potential exists for rethinking the business of government to reduce cost and improve the quality of government/constituent interactions. Most so-called “e-governance” initiatives have simply focused on internet-enabling old processes and systems, which has resulted in a series of costly, overlapping, and uncoordinated projects. Transformation is possible only when one examines the inter-relationships between government agencies – both processes and systems – which would result in the true efficiencies of e-business. “The benefits come from changing your business processes, not from installing ERP,” says Bill Swanton, a vice president at AMR Research in Boston. Adds Buzz Adams, president of Peak Value Consulting, which specializes in process improvement, “The technology will work the way you implement it, so what’s important is how you improve the processes — the way you do things.” (Bartholomew, 2004).
Functional Model of Public Administration

Enterprise resource planning (ERP) systems are becoming pervasive throughout the public sector for their support of critical administrative functions such as budgeting and financial management, procurement, human resources (HR), business process, and customer relationship management (CRM). ERP solutions for financial management cover accounts payable, accounts receivable, fixed asset accounting, cash management, activity-based and project accounting, cost-centre planning and analysis, financial consolidation, financial reporting, and legal compliance and reporting. Budget management solutions support economic policy, collection and analysis of budget and economic data, economic forecasting, revenue allocation to government agencies, budget approval, and budget monitoring. The Public Administration Functional Model is depicted in Figure 1.

Figure 1. Public Administration Functional Model

Tax and revenue management also falls under the financial and budget management umbrella. This includes both tax filing and tax case management. Tax filing covers taxpayer identification and
registration, tax assessment, management of online or offline revenue collection and reimbursements, secure processing, and payment. Tax case management involves investigation and enforcement, revenue auditing, and analysis. Supply chain management (SCM) functions cover cataloguing, bidding and negotiation, and tendering, with workflow support for the preparation of tender requests, invoicing and payment, purchasing, and reporting. Logistics support in the areas of material and equipment planning, inventory, warehouse management, transportation and distribution, and the maintenance, repair, and disposal of assets is another major area of public administration. Human resource management covers civil service recruitment, individual and collective training, performance assessment, career development, payroll and benefits, and travel and the movement of public employees. Finally, citizen-facing processes within an organization – through Web portals and call centers as well as traditional over-the-counter services – require integration of data sources with each contact point to ensure a consistent level of service. Traditionally, processes were people-driven and the emphasis of ERP was on automation. Having
achieved the basic requirements, organizations expected to achieve efficiency and control. Today the expectation from an ERP system is strategic support that adds value to the business. Collectively, all the functionalities of publishing, interacting, and transacting now need to be aligned to manage the entire government business. ERP combines them all into a single, integrated software solution that operates on a single database, so that the various departments can more easily share information, communicate with each other, and provide end-to-end service to their stakeholders.
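A minimal sketch of the “single database” point, using SQLite from Python: two notional modules (tax and grievances) reference one shared citizen master record instead of keeping their own copies. All table and column names are invented for illustration.

```python
# Two departments' modules share one citizen master table, so both always see
# the same, single version of the citizen's data.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE citizen (id INTEGER PRIMARY KEY, name TEXT, address TEXT);
CREATE TABLE tax_account (citizen_id INTEGER REFERENCES citizen(id), balance REAL);
CREATE TABLE grievance (citizen_id INTEGER REFERENCES citizen(id), status TEXT);
""")
db.execute("INSERT INTO citizen VALUES (1, 'A. Citizen', '12 Main St')")
db.execute("INSERT INTO tax_account VALUES (1, 250.0)")
db.execute("INSERT INTO grievance VALUES (1, 'open')")

# Any department's query joins against the same master record:
row = db.execute("""
    SELECT c.name, t.balance, g.status
    FROM citizen c
    JOIN tax_account t ON t.citizen_id = c.id
    JOIN grievance g ON g.citizen_id = c.id
""").fetchone()
print(row)  # ('A. Citizen', 250.0, 'open')
```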
Challenges in Implementation of ICT Solutions

E-governance initiatives are common in most countries, as they promise a more citizen-centric government and reduced operational cost. Unfortunately, most of these initiatives have not been able to achieve the benefits claimed. Often the reason for this failure is a techno-centric focus rather than a governance-centric focus (Saxena, 2005). The following challenges faced by organizations are equally applicable to governments.
Acquisition Process

All government ICT and modernization programs face major challenges, one of which is the acquisition process. This process requires a holistic view of the public administration for identifying and evaluating the right solution, taking decisions regarding outsourcing, and managing the whole implementation as a project. It is not very difficult to buy software and hardware that fulfill the needs of a single division without considering the needs of the entire organization. However, if each division of an organization develops its own business processes and IT infrastructure, the result will be lack of interoperability, duplicated components, functional gaps, and inability to share information. Though external consultants do provide support, the challenging task is creating an IT organization structure to support this process within the government structure.
High Expectations from Citizens

Citizens themselves are becoming increasingly demanding as they compare government services with other services, such as banking. Citizens do not want to be transferred from department to department for an answer to a simple telephone query, or to be forced to queue at public offices to make a straightforward transaction. They demand facilities that are offered round the clock. They expect high standards of service, instant access to information, efficient transactions, and support, whenever and wherever they need it.
Integration of Various Functions

Today is the era of packaged software, which in most cases is exhaustively comprehensive. The organization therefore needs to change its processes to align them with the ‘best practices’ incorporated in the software package. However, in the excitement of implementing such solutions, one often loses sight of the fact that each department has its own policies and rules that make it unique. “For integration to work, your internal systems must be working properly,” says Mary Kay DeVillier, director of e-business and information resources at Albemarle (Mullin, 2000).
Requirement of Clean Data

Organizations embarking on an ERP initiative must take care not to underestimate the amount of work needed to develop a clean set of master data. The chart of accounts, citizen data, policies and norms, and other mission-critical information have to be accurate from the start, or mistakes will multiply exponentially throughout the system. Many an ERP project has been scuttled or gone south altogether because companies failed to do
this kind of basic blocking and tackling early on (Bartholomew, 2004).
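A minimal sketch of the kind of up-front master-data check meant here; the field names are invented. The point is simply that duplicates and incomplete records are caught before migration, not after they have propagated through the integrated system.

```python
# Reject duplicate and incomplete citizen master records before they are
# loaded into the new system, where errors would multiply across modules.
def validate_master_data(records):
    errors, seen_ids = [], set()
    for i, rec in enumerate(records):
        cid = rec.get("citizen_id")
        if not cid:
            errors.append((i, "missing citizen_id"))
        elif cid in seen_ids:
            errors.append((i, "duplicate citizen_id"))
        else:
            seen_ids.add(cid)
        if not rec.get("name", "").strip():
            errors.append((i, "missing name"))
    return errors

sample = [{"citizen_id": "C1", "name": "A. Citizen"},
          {"citizen_id": "C1", "name": ""}]
print(validate_master_data(sample))
# [(1, 'duplicate citizen_id'), (1, 'missing name')]
```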
Standards for Interoperability

In order to support these demands, the internal functions of supply chain management assume greater importance. Rules and procedures are tremendously important in government. Unless records are kept properly, accessing information and tracing precedents becomes time consuming, and this is one of the reasons for delays in government administration. Rules and procedures can be made transparent to citizens, and traceability can be incorporated, which would improve the pace and effectiveness of governance through the use of Information Technology (Budhiraja, 2003). With multiple players and departments increasingly becoming involved in e-Government initiatives, standards for e-Government have become an urgent imperative for interoperability. Enterprise solutions would therefore help streamline operations and be flexible and agile enough to respond to the demands of public sector reforms.
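What an interoperability standard buys can be sketched very simply: if departments agree on one message format for, say, a citizen record, any system can validate and consume another department’s data without a bespoke adapter. The field set below is hypothetical.

```python
# Check an inter-departmental message against a (hypothetical) agreed schema.
import json

AGREED_FIELDS = {"citizen_id", "name", "district"}

def conforms(message: str) -> bool:
    """True if the JSON message carries at least the agreed field set."""
    return AGREED_FIELDS.issubset(json.loads(message))

msg = json.dumps({"citizen_id": "C1", "name": "A. Citizen", "district": "North"})
print(conforms(msg))  # True
```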
Information Security
Managing a secure environment in an era of integrated and seamless service delivery presents an increasing challenge for governments, as it can be overwhelming and costly without the right infrastructure. While processing sensitive data such as citizen and financial information stored on ERP systems, government organizations want to ensure they take every possible measure to maximize security.
Project Management
To get the most from the software, the people inside the government have to adopt the work methods outlined in the software. Getting people inside the government to use the software to improve the way they do their jobs is by far the most critical challenge. Most ERP implementations fail due to resistance to change and poor project management (Koch). The whole implementation has to be taken up as a project, clearly identifying the scope, cost, time and resource requirements, with a clear definition of milestones.
Hidden Costs
Certain costs are more commonly overlooked or underestimated than others: training, integration, customization, testing, data conversion and data analysis are frequently not adequately factored into budgets. This leads to delays in implementation, which in turn result in apathy and dissatisfaction.
IMPLEMENTATION STRATEGIES
An ERP system helps the different parts of the organization share data and knowledge, reduce costs, and improve the management of business processes. In spite of their benefits, many ERP systems fail (Stratman and Roth, 1999). Many ERP systems face implementation difficulties because of workers' resistance (Aladwani, 2001). Effective implementation of ERP requires establishing five core competencies, among which is the use of change management strategies to promote the infusion of ERP in the workplace (Al-Mashari and Zairi, 2000). During the last few years there have been major initiatives among different governments towards ushering ICT and its tools into the functioning of government. The emphasis has been on providing better services to citizens and on improving internal productivity. The cases listed in the Report for the President's Management Council in December 2005 are testimony to this fact (Evans, 2005). The strategies become more focused when we view the organization as an open system composed of interdependent components that include Strategy, Processes, Structure, Technology and Culture, as mapped by Scott-Morton in the Management of the 1990s equilibrium model (Scott-Morton, 1991). This model is depicted in Figure 2. The salient aspect of this open systems model is that the impact of a change on one component is immediately felt on the other components, either directly or indirectly. ERP implementation strategies can therefore be classified into organizational, technical, and people strategies. Organizational strategies cover strategic planning, business alignment, project management and change management. Technology strategies cover enterprise architecture, business process mapping, data capture and information security. People strategies cover communication, managing resistance to change and ongoing support.
Strategic Planning
The state of Missouri in the USA was the first to implement one of the largest government ERP systems; it was implemented in phases and was operational by 2001. The finance, budgeting and purchasing functions serve 6,000 end users, and its human resources and payroll modules serve 9,000. One of the most important keys to success for the implementation was that all state agencies had a say in the project from its earliest stages (Douglas, 2002). "Developing a strategic plan and bringing all stakeholders on board are crucial early steps in implementing ERP," said Ken Munson, senior principal in the State and Local Solutions Division of AMS Inc., Missouri's ERP vendor. "The plan has to be well communicated and well bought in, not just by the different branches of government, but by all levels of each of the branches," he said (Douglas, 2002). The Department of Information Technology, Government of India, has felt it necessary to create a rational framework for assessing e-Governance projects on various dimensions. It is desirable that a set of instruments is available to the administrators of those projects to appreciate the various attributes of a good e-Governance project, apply midcourse corrections where needed, and steer these projects in the right direction (Rao et al., 2004). Many problems related to ERP implementation stem from a misfit of the system with the characteristics of the organization (Markus et al., 2000). ERP 'tends to impose its own logic on a company's strategy, culture, and organization', which may or may not fit with the existing organizational arrangements (Davenport, 1998).
Figure 2. Management of the 1990s Equilibrium Model
Effective Project Management
Success depends on a well designed implementation plan. An ERP system is the information backbone and reaches into all areas of the business and value chain. It is necessary to treat implementation as a project and to plan the activities involving people, processes and technology. SAP's Accelerated SAP implementation methodology provides a guideline for implementation (Dejpongsarn, 2005). It focuses on five broad phases:
• Project Preparation: covers the requirement of resources (people, technology and budget)
• Business Blueprint: covers the documentation of current processes (as-is) and their comparison to the solution, which incorporates best practices; a gap analysis provides a starting point to identify the changes required by the departments and new development
• Realization: set-up (configuration), testing, data migration and development
• Final Preparation: training and final functional testing
• Go-live & Support: different approaches are followed for going live (all modules or related modules), with support through internal or external teams

Management of Change
Governments are using organizational change management techniques that have worked in the commercial world within their own modernization programs (Kauzlarich, 2003). Modernization strategies, such as ERP implementation, commonly involve change. Hence, responsiveness to internal customers is critical for an organization to avoid the difficulties associated with this change (Al-Mashari and Zairi, 2000; Aladwani, 1999; Aladwani, 1998). ERP implementation should be viewed as an organizational change process, rather than as the replacement of a piece of technology (Boonstra, 2005). Though change management strategies facilitate the success of ERP implementation, many ERP systems still face resistance and, ultimately, failure (Aladwani, 2001). For the reform to succeed, governments need to be culturally and technically prepared to understand and implement change. Procedural and legal changes in the decision making and delivery processes, as well as in internal functioning, are required to make a success of ERP implementation.

Business Process Mapping
Administrative reforms will have to precede attempts at implementing an ERP. The emphasis will have to be on simplifying procedures, rationalizing processes and restructuring government functions. This, however, is an ideal situation; in practice, reforms and ERP implementation proceed in parallel, as one cannot expect reforms to take place in a short period. The whole government reform agenda has a profound effect on government financial management. A shift of emphasis from inputs to outcomes is a key driver behind many reform initiatives (Microsoft). These include public/private partnerships, contracting, decentralization of service delivery to semi-autonomous agencies, competition between service providers, cost recovery, revenue generation, and the many other alternatives for service provision.
Data Capture
The basis of an ERP system is the data captured through the transactions that integrate the business processes to achieve a task. It is therefore necessary to have data standards for use across government to enable easier, more efficient exchange and processing of data. This will also remove ambiguities and inconsistencies in the use of data. The United States Government is one of the largest users and acquirers of data, information and supporting technology systems in the world. The E-Government program continuously identifies IT opportunities for collaboration and consolidation using the Federal Enterprise Architecture (FEA) framework. The framework is a comprehensive business-driven blueprint that enables the federal government to identify opportunities to leverage technology to:
• Reduce redundancy;
• Facilitate horizontal (cross-federal) and vertical (federal, state and local) information sharing;
• Establish a direct relationship between IT and mission/program performance to support citizen-centered, customer-focused government; and
• Maximize IT investments to better achieve mission outcomes.
The FEA framework and its five supporting reference models (Performance, Business, Service, Technical and Data) are now used by departments and agencies in developing their budgets and setting strategic goals. Data forms the base of a transactional system, and hence data integrity is essential for inter-operability. The FEA Data Reference Model (DRM), defined by the Office of Management and Budget, US, provides a "common language" for diverse agencies to use while communicating with each other and with state and local governments seeking to collaborate on common solutions and share information for improved services (Evans, 2005). The potential uses of the model have been summarized as:
• Provides an FEA mechanism for identifying what data the Federal government has and how it can be shared in response to a business/mission requirement
• Defines a frame of reference to facilitate Communities of Interest (which will be aligned with the Lines of Business) toward common ground and a common language to facilitate improved information sharing
• Provides guidance for implementing repeatable processes for sharing data Government-wide
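The value of such a shared "common language" can be illustrated with a small sketch: once two agencies compile against the same canonical record definition, data produced by one can be consumed by the other without an ad hoc mapping layer in between. The record and the agency functions below are hypothetical examples, not actual FEA DRM artifacts, and Java 16+ is assumed for the record syntax.

    // A hypothetical canonical record two agencies agree to exchange.
    // Both sides compile against the same definition, so there is no
    // ambiguity about field names, order or meaning.
    record CanonicalCitizen(String citizenId, String fullName, String zipCode) {}

    public class SharedStandardDemo {

        // The tax department produces data in the canonical format...
        static CanonicalCitizen fromTaxSystem() {
            return new CanonicalCitizen("C-42", "JANE DOE", "17033");
        }

        // ...and the licensing department consumes it directly, with no
        // department-specific conversion step.
        static void registerLicense(CanonicalCitizen c) {
            System.out.println("License issued for " + c.fullName()
                + " (" + c.citizenId() + ")");
        }

        public static void main(String[] args) {
            registerLicense(fromTaxSystem());
        }
    }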
Enterprise Architecture
If each division of an organization develops its own business processes and IT infrastructure, the end result may be a lack of interoperability, duplicated components, functional gaps, and an inability to share information; enterprise architecture is therefore required (Tucker and Debrosse, 2003). To avoid these problems, the federal government now mandates the use of enterprise architectures (EAs) by federal agencies seeking to obtain funding for any significant IT investment. Enterprise architectures act as a kind of roadmap for the design, development, and acquisition of complex, mission-oriented information systems. Creating an enterprise architecture requires participation from many areas of the organization and a great deal of communication to plan and implement each stage of the process. The result is a roadmap that guides an organization through the modernization process and enables it to achieve its goals (Tucker and Debrosse, 2003).
Managing People and Communication
Managing the implementation requires a dedicated team with adequate skills and a clear plan and agenda. It needs to be managed like any large project, with clear milestones and deliverables. The migration to an enterprise resource planning (ERP) system is often fraught with peril. Research and experience show that communication is a key mechanism for breaking down barriers to change. Employees are better able to tolerate change if they understand why the change is important and if they feel the changes are being handled with fairness and transparency. Good communication throughout the enterprise builds trust and understanding. People must see how the changes will affect the organization, the citizens and themselves (Kauzlarich, 2003). The success of implementation of ICT projects depends on the attitudinal readiness of the beneficiaries to accept change (Bowonder, Mastakar, & Sharma, 2005).
Knowledge Management
Knowledge management is a combination of culture and technology. Culture drives knowledge management while technology enables it. The following characteristics of government, based on its structure and functions, drive its knowledge management needs:
• Knowledge, which is actionable information (also known as knowledge assets), is a central resource of the government. Effective functioning of government rests on effective acquisition and dissemination of knowledge.
• Similar knowledge requirements are spread across the states, districts, and other local governments.
• Transfer of people across government departments calls for a repository of knowledge which can be used wherever they move.
• Proactive action is required if governments want to transform themselves into "anticipatory governments" to meet the challenges of the emerging e-Governance era (Misra, Hariharan, & Khaneja, 2003).
EMERGING TRENDS
While ERP is concerned with the use of IT for the efficient functioning of government departments, attempts are underway to morph customer relationship management concepts into effective service for citizens, which has given birth to a new field of knowledge called Citizen Relationship Management (Kannabiran, Xavier, & Anantharaaj, 2005). It is about making better use of the considerable amounts of information that government already collects (Smith, 2003). CzRM is about becoming "citizen-centric" (Nowlan, 2001; Hunter & Shine, 2001). A citizen can be defined as a consumer of public goods and services (Nowlan, 2001). In the emerging e-Governance scenario, citizens should be treated like the customers of business organizations, where serving citizens is the sole purpose of government. Citizen Relationship Management (CzRM) is a division of customer relationship management that focuses specifically on how governmental bodies relate to their constituents (Xavier, 2002; Jha & Bokad, 2003). Post-ERP trends are geared towards enhancing customer relationships and analyzing the marketplace for maximum profit. Data mining and business intelligence now not only support growth but also act as initiators of growth. Today's organizations are looking for applications to inspire and lead the way. IT is no longer an internal system of automation but is now an external means of customer communication and market analysis (Tucker and Debrosse, 2003). The services strategy entails building an integration layer that is separate and distinct from any of the software applications, including ERP. Services extract pieces of data and business logic from systems and databases and bundle them together into units that are expressed in 'business' terms. Implementing a service-oriented architecture can involve developing applications that use services, making applications available as services so that other applications can use those services, or both (Ort, 2005). Service-oriented architecture (SOA) is the emerging trend in enterprise computing because it holds the promise of IT becoming more agile in responding to changing business needs. Gartner reports that "By 2008, SOA will be a prevailing software engineering practice, ending the 40-year domination of monolithic software architecture."
A service-oriented architecture is an information technology approach or strategy in which applications make use of services available in a network such as the World Wide Web. The Internet aids good governance by increasing transparency and customer-oriented service delivery (Torres, Pina, & Acerete, 2006). By taking advantage of Internet protocols and technologies, one can minimize the need for ad hoc links between a company and its service supplier. Moreover, because the new service-based applications run inside Web browsers, staff can connect to the service provider from various locations. Another bonus of Web-based ERP applications is that the server location is transparent (Apicella, 2000).
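As a concrete sketch of "making applications available as services", the following minimal example exposes a single government function as a web service using the standard JAX-WS annotations. The CitizenPortal class and its operation are hypothetical; the sketch assumes a JAX-WS runtime is available (bundled with Java SE 6 through 10, or added as a library on newer JDKs).

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // A hypothetical government function exposed as a reusable service.
    @WebService
    public class CitizenPortal {

        // Any application on the network (a web front end, another agency's
        // system, an ERP module) can consume this operation via SOAP.
        @WebMethod
        public String applicationStatus(String applicationId) {
            // In a real deployment this would query the back end system;
            // here a fixed answer stands in for that lookup.
            return "Application " + applicationId + " is under review";
        }

        public static void main(String[] args) {
            // Publish the service; the WSDL describing it is generated
            // automatically at the given address.
            Endpoint.publish("http://localhost:8080/citizenPortal",
                             new CitizenPortal());
        }
    }

Because the service is described by a generated WSDL and reached over standard Internet protocols, the consuming application needs no knowledge of where or how the function is implemented, which is precisely the location transparency noted above.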
CONCLUSION
In the ultimate analysis, we find that the electronic governance wave has started worldwide. With the technologies to implement electronic governance already available to, and understood by, government, managerial issues are of key importance. A change in the mindset of people, particularly at the top levels of the bureaucracy and policy making, is important because it is they who provide the leadership. It is important to think beyond automation, towards redesign of the basic workflow within the government. The next generation of e-Governance will not be mere automation, but will require a reengineering of the government structurally and functionally. Enterprise modernization involves changes to all dimensions of an organization. It affects (Kirwan, Sawyer, & Sparrow, 2003):
• Organizational structure
• Policies, processes, and procedures
• Business and technical architectures
• Investment management practices
• Governance
• Culture
Government agencies have the advantage of applying lessons learned from the commercial business world, based on the analysis of both failures and successes of ERP implementations. Governments today understand the need for a consistent and flexible information infrastructure that can support organizational change and meet regulatory compliance. However, one cannot expect to revolutionize the government operations with ERP as it affects mostly the existing “back office” processes. This helps in optimizing the way things are done internally which is essential for building relationships with citizens, suppliers or partners. If ERP is the focus of an effort to bring dramatic improvements to the way government functions, it will bring with it some post-ERP depression too. The most common reason for the performance problems is that people see a change in the way processes were executed earlier. When people can’t do their jobs in the familiar way and haven’t yet mastered the new way, they panic, and the operations go into spasms. A government that plans to implement ERP should expect “to be very committed, because it’s a long process,” advised Jan Heckemeyer, administrator of Statewide Advantage for Missouri (SAM) II. It is especially important to involve every agency and resolve policy questions before configuring the system. “It drags out the process a bit, trying to build consensus and work out those issues,” she said. “But it’s time very well spent.” (Douglas, 2002).
REFERENCES
Al-Mashari, M., & Zairi, M. (2000). Information and business process equality: The case of SAP R/3 implementation. Electronic Journal on Information Systems in Developing Countries, 2. http://www.unimas.my/fit/roger/EJISDC/EJISDC.htm
Aladwani, A. (1998). Coping with users' resistance to new technology implementation: An interdisciplinary perspective. Proceedings of the 9th IRMA Conference, Boston, MA, 17-20 May, pp. 54-59.
Aladwani, A. (1999). Implications of some of the recent improvement philosophies for the management of the information systems organization. Industrial Management & Data Systems, 99(1), 33-39. doi:10.1108/02635579910249594
Aladwani, A. M. (2001). Change management strategies for successful ERP implementation. Business Process Management Journal, 7(3). doi:10.1108/14637150110392764
Apicella, M. (2000). The hand that moves your business: ERP software moves to the Web, presenting both pitfalls and opportunities. InfoWorld, (June), 26.
Bartholomew, D. (2004). The ABC's of ERP. CFO IT, October 5, 2004. http://www.cfo.com/article.cfm/3171508/c_2984786?f=Technology_topstories
Boonstra, A. (2005). Information systems as redistributors of power: Interpreting an ERP implementation from a stakeholder perspective. som.eldoc.ub.rug.nl/FILES/reports/themeA/2005/05A06/05A06.pdf
Bowonder, B., Mastakar, N., & Sharma, K. J. (2005). Innovative ICT platforms: The Indian experience. International Journal of Services Technology & Management, 6(3/4/5).
Budhiraja, R. (2003). Electronic governance: A key issue in the 21st century. Paper by the Additional Director, Electronic Governance Division, Ministry of Information Technology, Govt. of India.
Carter, M. (2004). Keynote address on e-Governance: Transforming India. National Summit, India: The Knowledge Capital, February 17, 2004.
Davenport, T. H. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, 76(4), 121-132.
Dejpongsarn, N. (2005). ERP framework with mySAP solution. Presentation.
Douglas, M. (2002). Planning for the enterprise. New York, (April), 16.
Evans, K. S. (2005). Expanding e-Government: Improved service delivery for the American people using information technology. Report for the President's Management Council, December 2005.
Flak, L. S., & Rose, J. (2005). Stakeholder governance: Adapting stakeholder theory to e-Government. Communications of AIS, 16, 642-664.
Gartner Group. (2001). Gartner's four phases of e-Government model. USA: Gartner Group.
Europa. (2001). E-government: Electronic access to public services. (Online; cited December 20, 2003).
Hunter, D. R., & Shine, S. (2001). Customer relationship management: A blueprint for government. White paper, Australia: Accenture.
Jha, B., & Bokad, P. (2003). Managing multiplicity of citizens' identity: A taluka level case study. International Conference on E-Governance, 1(5), 24-31.
Kannabiran, G., Xavier, M. J., & Anantharaaj, A. (2005). Enabling e-Governance through citizen relationship management: Concept, model and applications. Journal of Services Research, 4(2), October 2004 - March 2005.
Kauzlarich, V. (2003). Organizational change management is key to program's success. Enterprise Modernization Issue, Fall 2003, 7(2).
Khalil, M. A., Lanvin, B. D., & Chaudhry, V. (2002). The e-Government handbook for developing countries. infoDev Program, The World Bank Group.
Kirwan, K., Sawyer, D., & Sparrow, D. (2003). Transforming government through enterprise modernization. Enterprise Modernization Issue, Fall 2003, 7(2).
Koch, C. (n.d.). ABC: An introduction to ERP. Getting started with Enterprise Resource Planning (ERP). http://www.cio.com/article/40323/ABC_An_Introduction_to_ERP
Markus, M. L., Axline, S., Petrie, D., & Tanis, C. (2000). Learning from adopters' experiences with ERP: Problems encountered and success achieved. Journal of Information Technology, 15, 245-265. doi:10.1080/02683960010008944
Microsoft. Enterprise resource planning: Managing the lifecycle of government business. Lead feature from the Tourism and Travel edition of Microsoft in Government, Worldwide.
Misra, D. C., Hariharan, R., & Khaneja, M. (2003, March). E-knowledge management framework for government organizations. Information Systems Management, 20(2), 38-48. doi:10.1201/1078/43204.20.2.20030301/41469.7
Mullin, R. (2000). ERP-2-ERP: Forging a proprietary link. Chemical Week, (June), 28.
Nowlan, S. (2001). Citizen relationship management: E-CRM in the public sector. USA: PricewaterhouseCoopers.
Ort, E. E. (2005). Service-oriented architecture and Web services: Concepts, technologies, and tools. Sun Developer Network. http://java.sun.com/developer/technicalArticles/WebServices/soa2/SOATerms.html#soaterms
Rama Rao, T. P., Venkata Rao, V., Bhatnagar, S. C., & Satyanarayana, J. (2004). E-Governance Assessment Frameworks (EAF Version 2.0). Report for the Department of Information Technology, Government of India, May 2004.
AMR Research. (2002). The multibillion-dollar enterprise performance planning market, 16 August 2002.
Saxena, K. B. C. (2005). Towards excellence in e-governance. International Journal of Public Sector Management, 18(6), 498-513.
Scott-Morton, M. (Ed.). (1991). The corporation of the 90s: Information technology and organizational transformation. Oxford: Oxford University Press.
Smith, A. (2003). Opinion: Citizen relationship management. (Online; cited June 12, 2003).
Stratman, J., & Roth, A. (1999). Enterprise resource planning competence: A model, propositions and pre-test, design-stage scale development. 30th DSI Proceedings, 20-23 November, pp. 1199-1201.
Torres, L., Pina, V., & Acerete, B. (2006, April). E-governance developments in European Union cities: Reshaping government's relationship with citizens. Governance, 19(2), 277. doi:10.1111/j.1468-0491.2006.00315.x
Tucker, R., & Debrosse, D. (2003). Enterprise architecture: Roadmap for modernization. Enterprise Modernization Issue, Fall 2003, 7(2).
Xavier, M. J. (2002, April-June). Citizen relationship management: Concepts, tools and applications. South Asian Journal of Management, 9(2), 23-31.
KEY TERMS AND DEFINITIONS
Clean Data: For an organization to function effectively, data needs to be easily accessible both to customers and internal users. Data is dispersed, as governments, like business organizations, work in silos. To make this data accessible to others, data conversion is a necessity, given the multiple sources and input formats, inconsistent styles and complexity of data structures. For any new system to get started it is necessary to convert the existing data from legacy systems or manual records to fit the new data structures. This is a critical factor in ERP implementation projects. By clean data it is meant that there are no duplicate definitions of data which cause inconsistency.
Customization: In an attempt to deal with the potential problems presented by existing information systems, there is a shift towards the implementation of ERP packages. Generally it is felt that ERP packages are most successfully implemented when the standard model is adopted. Yet, despite this, customisation activity still occurs, reportedly due to misalignment of the functionality of the package and the requirements of those in the implementing organisation. In such a situation the first thought that comes to a layman's mind is to modify the software to provide the necessary report or layout. Most ERP products are generic; hence, some customisation is needed to suit the company's needs. But optimal customisation in most cases is subjective, with no definite rules, as end-users are not always technically equipped to understand the far-reaching implications of the changes that they are demanding. There is always a risk of destabilising the core application. Customisation can make or break the implementation of an ERP. It is therefore necessary to strike the right balance.
E-Government: Government's foremost job is to focus on safeguarding the nation/state and providing services to society as custodian of the nation's/state's assets. E-Government can therefore be defined as a technology-mediated process
of reform in the way governments work, share information, engage citizens and deliver services to external and internal clients, for the benefit of both government and the clients that they serve.
Enterprise Architecture: An organisation has assigned roles and responsibilities, and established plans for developing products and services. The scope of enterprise architecture can be defined as encompassing the whole enterprise, with confirmed institutional commitment to deliver products and services, both current and planned, with clear transition plans. This in fact would define the organisation structure, functions and the relationships that enable the organisation to meet its desired goals, and it is extremely essential for enterprise modernisation. Development of the enterprise architecture will typically involve analysing and documenting the current architecture (the "as-is" or "baseline" architecture), and then moving on to a definition of the architecture as it is planned to develop in the future (the architecture as it should be, or "target" architecture), which would align with the vision of the organisation.
Enterprise Resource Planning (ERP): An integrated information system that integrates all departments within an enterprise. Evolving out of the manufacturing industry, ERP implies the use of packaged software rather than proprietary software. ERP modules may be able to interface with an organization's own software with varying degrees of effort, and, depending on the software, ERP modules may be alterable via the vendor's proprietary tools as well as proprietary or standard programming languages. An ERP system can include software for manufacturing, order entry, accounts receivable and payable, general ledger, purchasing, warehousing, transportation and human resources. The major ERP vendors are SAP and Oracle, specialising in transaction processing that integrates various departments.
Information Infrastructure: The technology infrastructure required to manage information in an organisation. It consists of the computers, software, data
structures and communication lines underlying critical services that society has come to depend on. It consists of information systems which cover critical aspects such as financial networks, the power grid, transportation, emergency services and government services. This is required to implement an ERP system.
Inter-Operability: Various departments organize work in silos and tend to work independently. Data forms the basis of a transaction processing system like ERP, which integrates these departments so that basic data entered once can be used by many. More efficient exchange and processing of data is required to provide seamless execution of a service. Inter-operability therefore means the capability of different departments to work together to provide a service.
Public Administration: Every facet of our daily lives is impacted in some way by the actions of the federal, state, or local bureaucracies that manage and organize the public life of the country and its citizens. Public administration is the study of public entities and their relationships with each other and with the larger world. It addresses issues such as how public sector organizations are organized and managed, how public policy structures the design of government programs that we rely upon, how our states, cities, and towns work with the federal government to realize their goals and plan for their futures, and how our national government creates and changes public policy programs to respond to the needs and interests of our nation.
Service-Oriented Architecture: Service-oriented architecture (SOA) is the emerging trend in enterprise computing because it holds the promise of IT becoming more agile in responding to changing business needs. Implementing a service-oriented architecture can involve developing applications that use services and making applications available as services. A service-oriented architecture is an information technology approach or strategy in which applications make use of services available in a network such as the World Wide Web.
This work was previously published in Handbook of Research on Enterprise Systems, edited by Jatinder N. D. Gupta, Sushil Sharma and Mohammad A. Rashid, pp. 346-361, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.9
Enterprise Application Integration (EAI)
Christoph Bussler, Merced Systems, Inc., USA
DOI: 10.4018/978-1-60566-242-8.ch088

ENTERPRISE APPLICATION INTEGRATION (EAI) TECHNOLOGY
As long as a business has only one enterprise application or back end application system, there is no need to share data with any other system in the company. All data that has to be managed is contained within one back end application system and its database. However, as businesses grow, more back end application systems find their way into their information technology infrastructure, managing different specialized business data, mainly introduced due to the growth. These back end application systems are not independent of each other; in general they contain similar or overlapping business data or are part of business processes. Keeping the data in the various application systems consistent with each other requires their integration, so that data can be exchanged or synchronized. The technology that supports the integration of various application systems and their databases is called Enterprise Application Integration (EAI) technology. EAI technology is able to connect to back end application systems in order to retrieve and to insert data. Once connected, EAI technology supports the definition of how extracted data is propagated to back end application systems, solving the general integration problem.
BACKGROUND
Typical examples of back end application systems that are deployed as part of a company's information technology (IT) infrastructure are an Enterprise Resource Planning (ERP) system or a Manufacturing Resource Planning (MRP) system.
In the general case, different back end application systems store potentially different data about the same objects, like customers or machine parts. For example, a part might be described in an ERP system as well as in an MRP system. The reason for the part being described in two different back end application systems is that different aspects of the same part are described and managed. In fact, this means that a not necessarily equal representation of the object exists twice, once in every system. If there are more than two systems, then it may very well be the case that the same object is represented several times. Any changes to the object have to be applied to the representation of the object in all systems that contain the object. And, since this distributed update cannot happen simultaneously (in the general case), during the period of applying the change the same object will be represented differently until the changes have been applied to all representations in all back end application systems. It can therefore very well happen that during an address update of a customer object the customer has two addresses: some representations of the customer already have the new address while others still have the old address. This situation exists until the distributed update is complete. Furthermore, in most cases there is no record of how many systems represent the same object. It might be the case, and actually often is, that a change is not applied to all objects because it is not known which back end application systems have a representation of the object in the first place. Only over time will these cases be detected and rectified, mainly through the resolution of error situations. In summary, the same object can be represented in different back end application systems, updates to an object can cause delays and inconsistencies, and the locations of object representations can be unknown due to missing object registries. A second use case is that precisely the same object is replicated in different back end application systems. In this case the update of the object
in one system has to be applied to all the other systems that store the same object. The objects are replicas of each other: since all have to be updated in the same way, their content is exactly the same. Only when all the objects are updated are they consistent again, and only then is the overall status across the back end application systems consistent again. In the replicated case it must not be possible for the same object to expose different properties, as in the address example above. A third use case is that applications participate in common business processes. For example, first a part is purchased through the ERP system, and upon delivery it is entered and marked as available in the MRP system. The business process behind this consists of several steps, namely purchase a part, receive the part, make the part available, and so on. In this case the back end application systems do not share common data, but their data state depends on the progress of a business process, which has to update the back end application systems accordingly. The data will change state according to the progress of the business process. In this sense the systems share a common business process, each managing the data involved in it. All three use cases, while looking quite different from each other, have to be implemented by companies in order to keep their business data consistent. EAI technology (Bussler 2003; Hohpe and Woolf 2003) makes it possible to accomplish this, as it provides the necessary functionality described next.
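Before turning to that functionality, a minimal sketch may help fix the first two use cases: an object registry records which systems hold a representation of an object, and an update is propagated to every registered holder. The BackEndSystem interface and the direct method calls are hypothetical simplifications; real EAI products realize this pattern through adapters and persistent message queues rather than synchronous calls.

    import java.util.*;

    // Hypothetical stand-in for an adapter to one back end system.
    interface BackEndSystem {
        String name();
        void applyUpdate(String objectId, String newValue);
    }

    public class ReplicationPropagator {

        // The object registry: which systems hold a representation of
        // which object. Without such a registry, some copies are missed
        // and only discovered later through error resolution.
        private final Map<String, List<BackEndSystem>> holders = new HashMap<>();

        void register(String objectId, BackEndSystem system) {
            holders.computeIfAbsent(objectId, k -> new ArrayList<>()).add(system);
        }

        // Propagate one change to every system that stores the object.
        // Until the loop completes, the copies are temporarily inconsistent;
        // this is the delay/inconsistency window described in the text.
        void propagate(String objectId, String newValue) {
            for (BackEndSystem s : holders.getOrDefault(objectId, List.of())) {
                s.applyUpdate(objectId, newValue);
            }
        }

        public static void main(String[] args) {
            ReplicationPropagator p = new ReplicationPropagator();
            BackEndSystem erp = new BackEndSystem() {
                public String name() { return "ERP"; }
                public void applyUpdate(String id, String v) {
                    System.out.println("ERP updated " + id + " -> " + v);
                }
            };
            BackEndSystem mrp = new BackEndSystem() {
                public String name() { return "MRP"; }
                public void applyUpdate(String id, String v) {
                    System.out.println("MRP updated " + id + " -> " + v);
                }
            };
            p.register("customer-7", erp);
            p.register("customer-7", mrp);
            p.propagate("customer-7", "new address");
        }
    }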
ENTERPRISE APPLICATION INTEGRATION TECHNOLOGY
Enterprise application integration technology addresses the various scenarios introduced above by providing the required functionality. In the following, the different functionalities are introduced step by step. First, EAI technology provides a reliable communication
mechanism so that any communication of data between back end application systems is reliable. This ensures that no communication is lost. This is often achieved by sending messages through persistent and transactional queuing systems (Gray and Reuter 1993). Queuing systems store messages persistently and transactionally, achieving exactly-once message communication. EAI technology uses adapters (J2EE Connector Architecture 2004) in order to connect to the proprietary interfaces of back end application systems. An adapter knows the proprietary interface and provides a structured interface to the EAI technology. This allows EAI technology to connect to any back end application system for which an adapter is available. If no adapter is available for a particular back end application system, then a new one has to be built. Some EAI technologies provide an adapter development environment allowing new adapters to be constructed. Adapters can retrieve data from back end application systems as well as store data within back end application systems. They therefore enable the communication of the EAI technology with the back end application systems. EAI technology, by means of connecting to all required back end application systems with adapters, can propagate data changes to all of the back end application systems. For example, this makes it possible to ensure that all representations of an object in several back end application systems are changed as required. EAI technology is required to process changes in an order-preserving fashion to ensure object consistency. For example, an update of an object followed by a read of the object value must be executed in the given order. Not following the order would return an incorrect value (i.e., the value before instead of after the update). Any negative consequences of delays and inconsistencies are avoided by ensuring that every request is processed in the order of arrival. In early implementations of EAI technology, publish/subscribe technology was used to establish the communication rules (Eugster et al. 2003). A
130
back end application system that has interest in specific objects and changes to them subscribes to those. Every back end system publishes changes like object creation, update or deletion. If there is a subscription that matches a publication, the same changes are applied to the subscribing back end application system. While publish/subscribe technology is quite useful, it is not able to implement business process integration (Bussler 2003). In this case several back end application systems have to be called in a specific order as defined by a business process. Sequencing, conditional branching and other control flow constructs define the invocation order of the back end application systems dynamically. Publish/subscribe cannot express this type of multi-step execution and more expressive technology is required. EAI technology incorporated workflow management systems in order to enable more complex and multi-step executions. Business process based integration requirements are addressed through workflows. Through a workflow specification the overall invocation order can be established and complex processes like human resource hiring or supply-chain management are possible. Workflow steps interact with back end application systems through adapters whereby the workflow steps themselves are sequenced by workflow definitions. While business process based integration is a significant step in the development of EAI technology, it is not sufficient in heterogeneous environments. For example, different back end application systems follow in general different data models. If this is the case the same type of data is represented in different ways. For example, one back end application system might implement an address as a record of street name, street number, city and zip code while another back end application system might implement an address as two strings, “address line 1” and “address line 2”. If data is transmitted from one back end application system to another one the address has to
be mediated from one representation to the other representation for the data to be understood by the receiving back end application system.
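To make this mediation step concrete, the sketch below maps the structured address record described above onto the two-line representation. The field names are hypothetical, the sketch assumes Java 16+ for the record syntax, and commercial EAI tools typically generate such transformations from graphically defined mapping rules rather than hand-written code.

    // Structured address as the sending system might model it.
    record StructuredAddress(String streetName, String streetNumber,
                             String city, String zipCode) {}

    // Two-line address as the receiving system expects it.
    record TwoLineAddress(String addressLine1, String addressLine2) {}

    public class AddressMediator {

        // A mapping rule: concatenate the fine-grained fields into the
        // coarse-grained lines the target system understands.
        static TwoLineAddress mediate(StructuredAddress src) {
            return new TwoLineAddress(
                src.streetNumber() + " " + src.streetName(),
                src.zipCode() + " " + src.city());
        }

        public static void main(String[] args) {
            StructuredAddress src =
                new StructuredAddress("Chocolate Avenue", "701", "Hershey", "17033");
            // Prints: TwoLineAddress[addressLine1=701 Chocolate Avenue,
            //                        addressLine2=17033 Hershey]
            System.out.println(mediate(src));
        }
    }

Note that the inverse mapping is harder: splitting free-text lines back into fields requires parsing, and this asymmetry is part of what makes data mediation a difficult problem.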
CURRENT DEVELOPMENTS AND FUTURE TRENDS
The latest development in EAI technology takes data heterogeneity into consideration and provides mediation technology. For example, Microsoft's Biztalk Server (Microsoft 2007) has a mapping tool that allows the graphical definition of mapping rules that can transform a message in one format into a message of a different format. Tools from other vendors follow the same approach. The research community is also working on the data mediation problem, and an overview is given in (Rahm and Bernstein 2001). Also, this new generation recognizes that back end application systems expose not only data but also public processes that define interaction sequences of data (Bussler 2003). A public process allows determining which messages a back end application system sends or receives and in which order. This makes it possible to ensure that all messages are exchanged appropriately with the back end application system. Web Services are a relatively new approach to communicating data and messages over the Internet (Alonso et al. 2004). Web Services are starting to be used in EAI technologies in several places. One is for the communication between back end application systems and the EAI technology itself. The other place is to define the interfaces of back end application systems as Web Services. In this case all back end application systems expose a homogeneous interface technology, and an EAI technology no longer has to deal with the huge variety of interfaces of back end application systems. Architectural approaches based on Web Service technology are emerging. For example, BEA (BEA 2007) provides a whole suite of technologies that
support the EAI problem based on Web Service technology. Other vendors like IBM (IBM 2007), Microsoft (Microsoft 2007) or Oracle (Oracle 2007) are also offering SOA based products. The architecture underlying Web Service based approaches is called Service-Oriented Architecture (SOA) and standardization work is ongoing, too (OASIS SOA RM 2007). Further out are Semantic Web technology based approaches. They are in a research state today but can be expected to be available in commercial technology later on. A significant effort applying Semantic Web technology in order to solve the integration problems is the Web Service Modeling Ontology (WSMO) (WSMO 2007). This ontology, which is developed by a significant number of organizations, uses Semantic Web Technology to define a conceptual model to solve the integration problem that encompasses EAI integration. The main conceptual elements proposed are ontologies for defining data and messages, mediators for defining transformation, goals for dynamically binding providers and web services for defining the interfaces of communicating services. In order to define a specific EAI integration, the formal Web Service Modeling Language (WSML) (WSML 2007) is defined. This can be processed by a WSML parser. In order to execute B2B integration the Web Service Modeling Ontology Execution environment (WSMX) (WSMX 2007) is developed. It serves as a runtime environment for integrations defined through WSML. Another Semantic Web technology based approach is OWL-S (OWL-S 2007). Very recently a first standard in the Semantic Web Service space was established as a W3C recommendation. It is called SAWSDL and can be retrieved at (SAWSDL 2007). The main concept behind this standard is the augmentation of the WSDL approach for describing Web Service interfaces.
CRITICAL ISSUES OF EAI TECHNOLOGIES
The critical issues of Enterprise Application Integration are listed in Table 1. Current generations of EAI technology have to address these issues in order to be competitive and considered appropriate implementations for the EAI problem. The same applies to research. Recently research as well as new products focus on Web Services as a means to integrate applications. However, Web Services are only a new technology, and all the requirements discussed above and the issues listed below still apply. Semantic Web Services (WSMO 2007) address the requirements and issues taking all aspects into consideration.

Table 1. A Summary of Critical Issues
Autonomy: Back end applications are autonomous in their state changes, and EAI technology must be aware of this, since it cannot control the applications' state changes.
Communication: EAI technology must provide reliable and secure communication between back end application systems in order to ensure overall consistency.
Delay: Changes of objects represented in several back end application systems might not happen instantaneously due to the time required to do the update, so a delay can occur between the first and the last change.
Distribution: Back end application systems are distributed in the sense that each has its own separate storage or database management system, with each being separately controlled and managed.
Duplication: Objects that have to reside in several back end application systems might require duplication in order to ensure back end application state consistency.
Heterogeneity: Back end application systems are implemented based on their own data models, and due to their particular management focus the data models in general differ, not complying with a common standard.
Inconsistency: If a delay happens while changing different representations of the same object, inconsistencies can occur if these different representations are accessed concurrently.
Mediation: Due to the heterogeneity of back end application systems, objects are represented in different data models that have to be mediated if objects are sent between applications.
Process management: Multi-step business processes across back end application systems require process management to implement the particular invocation sequence logic.
Publish/subscribe: Back end application systems can declare interest in object changes through subscriptions that are matched with publications of available object changes.
Reliability: The integration of back end application systems must be reliable in order to neither lose data nor accidentally introduce data by retry logic in case of failures.
Replication: Replication is a mechanism that ensures that changes of an object are automatically propagated to duplicates of this object. The various representations act as if they are one single representation.
Security: Communication of data between back end application systems through EAI technology must ensure security to avoid improper access.
CONCLUSION
Enterprise Application Integration (EAI) technology is essential for enterprises with more than one back end application system. Current EAI technology is fairly expressive, being able to handle most integration tasks. Newer developments like Web Services (Web Services 2007) and Semantic Web Services (WSMO 2007; OWL-S 2007) will significantly improve the situation by introducing semantic descriptions, making integration more reliable and dependable.
REFERENCES
J2EE Connector Architecture. (2004). J2EE Connector Architecture. java.sun.com/j2ee/connector/
Alonso, G., Casati, F., Kuno, H., & Machiraju, V. (2004). Web Services: Concepts, Architectures and Applications. Springer-Verlag.
BEA. (2007). BEA Systems, Inc. www.bea.com
Bussler, C. (2003). B2B Integration. Springer-Verlag.
Eugster, P., Felber, P., Guerraoui, R., & Kermarrec, A.-M. (2003). The many faces of publish/subscribe. ACM Computing Surveys, 35(2). doi:10.1145/857076.857078
Gray, J., & Reuter, A. (1993). Transaction Processing: Concepts and Techniques. Morgan Kaufmann.
Hohpe, G., & Woolf, B. (2003). Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley Professional.
IBM. (2007). International Business Machines, Corp. www.ibm.com
Microsoft. (2007). Microsoft Corporation. www.microsoft.com
OASIS SOA RM. (2007). www.oasis-open.org/committees/tc_home.php?wg_abbrev=soa-rm
Oracle. (2007). Oracle Corporation. www.oracle.com
OWL-S. (2007). OWL-S. www.daml.org/services/owl-s/
Rahm, E., & Bernstein, P. (2001). A survey of approaches to automatic schema matching. The VLDB Journal, 10, 334-350. doi:10.1007/s007780100057
SAWSDL. (2007). www.w3.org/2002/ws/sawsdl/
Web Services. (2007). Web Service Activity. www.w3.org/2002/ws/
WSML. (2007). Web Service Modeling Language. www.wsmo.org/wsml
WSMO. (2007). Web Service Modeling Ontology. www.wsmo.org
WSMX. (2007). Web Service Modeling Execution Environment. www.wsmx.org
KEY TERMS AND DEFINITIONS
Adapters: Intermediate software that understands proprietary back end application interfaces and provides easy-to-use interfaces for EAI technology integration.
Back End Application Systems: Software systems managing the business data of corporations.
EAI Technology: Software systems that provide application integration functionality by sending and receiving messages and by retrieving data from, and storing data in, back end application systems.
Process Management: Process management allows defining specific invocation sequences of back end application systems in the context of EAI.
Publish/Subscribe: A technology whereby interest can be declared and is matched upon the publication of object changes.
Workflow Technology: A software component that provides the languages and interpreters to implement process management.
This work was previously published in Handbook of Research on Innovations in Database Technologies and Applications: Current and Future Trends, edited by Viviana E. Ferraggine, Jorge Horacio Doorn and Laura C. Rivero, pp. 837-843, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.10
Enterprise Tomography: An Efficient Approach for Semi-Automatic Localization of Integration Concepts in VLBAs
Jan Aalmink, Carl von Ossietzky University Oldenburg, Germany
Jorge Marx Gómez, Carl von Ossietzky University Oldenburg, Germany
DOI: 10.4018/978-1-60566-856-7.ch012

ABSTRACT
Enterprise tomography is an interdisciplinary approach for efficient application lifecycle management of enterprise platforms and very large business applications (VLBAs). Enterprise tomography semi-automatically identifies and localizes semantic integration concepts and visualizes integration ontologies in semantic genres. In particular, delta determination of integration concepts is performed in the dimensions of space and time. Enterprise tomography supports software and data comprehension. SMEs, large-scale development organizations and maintenance organizations can benefit from this new approach. This methodology is useful for tracking database changes of business processes or coding changes within a specific domain. In this way root cause analysis is supported.
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Enterprise Tomography
universe our approach semi-automatically identifies semantic coherent entities of interest. We propose an algorithm for tracking the changes in this data universe in the dimension time and space. In contrast to Web 2.0 search engines, we apply advanced indexing techniques. To meet developers and maintainers needs to the greatest extent possible, we take integration ontology extraction algorithms into consideration, enable controllable domain-specific indexing, apply delta analysis based on indices and visualizes search results of the Delta-Operator in semantic categories. Furthermore, Enterprise Tomography includes sharing of integration knowledge from individual integration experts across the enterprise development and maintenance community. In Enterprise Software Industry development and maintenance of VLBAs and Enterprise Platforms is getting more and more complex. Large and distributed teams are involved, teams are changing and division of labor proceeds. Agile development methods assume efficient development and maintenance means for software and business data evaluation. Without knowing the semantic integration of enterprise software it is inherently difficult to control and orchestrate large scaled development and maintenance processes. Domain-specific enterprise integration knowledge, coded in enterprise software, is normally not instantaneously available for development teams. Lack of precise knowledge of integration concepts in development and maintenance phases results in erroneous software, is risky and might have negative business impact for both the software manufacturer and the consumer. In this paper we present a semi-automatic environment for Application Lifecycle Management of VLBAs and Enterprise Platforms based on enhanced standard scientific algorithms. In accordance with medicine diagnostics, we utilize the metaphor Enterprise Tomography for scanning, indexing, identifying, visualization and delta visualization of enterprise integration knowledge
containing in enterprise software conglomerates. Based on the results of the Enterprise Tomograph the operating teams are in the position to make efficient decisions and have a reliable foundation for incremental steps of the development and maintenance life cycle. The Enterprise Tomograph represents a central ecosystem for sharing domain specific integration knowledge across the development teams. Because of sharing and socializing of integration knowledge across SCRUM teams, the Enterprise Tomography approach can be incorporated in the Enterprise 2.0 initiative. In the research area VLBA (Very Large Business Applications) located within business informatics, Application Lifecycle Management is the center of attention (Grabski, et. al, 2007). A real life VLBA, in dimension time, is in a permanent flux: Gradual development to meet business requirements, continuous improvements to exploit innovation, continuous maintenance to keep consistent business processing, horizontal connection of software systems to scale data processing and to extend business scope, recombination of loosely coupled services to exploit new business functionality, re-configuration and personalization, data evolution resulting from service calls and business transactions and background processing, just to name a few. VLBAs are not usually built from scratch and deployed in an immutable stadium (Opferkuch, 2004; Ludewig & Opferkuch, 2004). VLBAs are not monomorphic. Some of the characteristics of VLBAs are: Complexity, a long life-cycle, huge continuous development and maintenance efforts, large user groups, inter- and intra-enterprise coupling. Today, in business reality, VLBAs are conglomerates of inter-operating software systems. Technical and semantic integration are the ‘DNA’ of a VLBA. Integration can cross both system and system boundaries. In this chapter we want to propose and outline a generic algorithm that makes this VLBA integration visible and tangible
from different perspectives in different semantic genres. Moreover, a delta operator is supposed to make the integration difference between points of time t0 and t1 visible. Bearing in mind that VLBAs consist of heterogeneous constituents, we need an abstract, holistic view on the normalized integration aspects. Beyond software, we also take persistent data, metadata, system logs, business process activity journals, virtual and transient data, solution databases, model databases, and so forth into consideration. So we not only consider the software of the Enterprise Platform itself but the business data, metadata, and contextual data as well. Integration is a polymorphic topic. We regard integration on different levels of granularity. For instance, on a low level of granularity, dynamic method calls can be seen as an integration aspect. On a medium level of granularity, cross-component service consumption can be seen as an integration aspect, or security as a cross-cutting concern scattered across a VLBA. Registered integration content (e.g., message routing rules of integration hubs in an enterprise service bus) is to be regarded as high-level integration. Logistical quantity flow, accounting value flow, and financial data flow are also prominent examples of integration on a high granularity level. Workflow, webflow, and process integration can be regarded as integration on a high granularity level as well. Development teams and maintenance teams have different perspectives on VLBAs in their daily work: developers are primarily assembly-focused (bottom up), whereas maintainers have the inverted, top-down view on a VLBA: maintainers think in terms of error symptoms and the ontologies provoking the error symptoms (Abdullah, 2001). For example, they need to find out, for a given (inconsistent) production order id, the involved coding and metadata, the header and items in the persistency layer, material consumption, confirmation items, models, relevant documentation, etc. In this example we regard integration as a semantic join between those concepts.
It is valuable information to see the delta of this semantic join between points of time t0 and t1. So one can track the evolution of the integration of a VLBA. A comparison of VLBAs (e.g., modified code clones) will be possible with our algorithm as well. So it might be very interesting to evaluate the software add-on between two releases, or the delta in the business data. In accordance with tomography in medical diagnostics, we utilize a similar metaphor: the Enterprise Tomograph. It is supposed to diagnose integration aspects of a complex VLBA, perform indexing of all scanned information, and provide time-efficient access to the assembled scan data from different semantic perspectives. The integration-aspect delta, especially, is supposed to be made available to large maintenance and service teams. Based on this information, maintenance teams can locate real error symptoms in a VLBA more easily, and they are in a better position to assess the consequences of any change in the VLBA software and therefore mitigate the risk of inconsistent data processing. In the scope of this chapter we assume that, in an enterprise software factory, the construction procedure of a VLBA is very similar to that of the building industry: requirement specification, architecture modeling, review and acceptance, construction, and finally continuous maintenance. The construction procedure in both industries is quite similar: if architecture models are in place, the construction procedure starts bottom up, beginning with the foundation and ending with the construction of the roof, not vice versa. Traditional top-down, model-driven approaches of software engineering do not necessarily meet the needs of service and maintenance teams in the context of VLBAs adequately. Models are important, but usually those models are not linked to the VLBA software. To bridge this technical gap, the Enterprise Tomograph comes into play with a multiple-phase iteration: the Enterprise Tomograph knows three phases. The first phase is concerned with scanning of the VLBA, the
intermediate phase constructs and prepares indices for efficient access, and the third phase provides access to the integration data. During the time-consuming scanning procedure (parallel crawling of the VLBA), the Enterprise Tomograph orchestrates concept mining algorithms. The mining algorithms extract models, software fragments, links between software fragments, business objects, technical objects, metadata, and solution databases, and transform those to standard, grammar-compliant ontology representations. The resulting ontology is the integration knowledge representation. We take a subset of the ontology representation standards into consideration: the concept mining algorithms are supposed to map the integration ontology to rooted unordered labeled trees. The set of rooted unordered labeled trees is indexed and stored in PAT Arrays (a PAT Array is a space-efficient, access-optimized data structure for Patricia tries and suffix trees; PAT Arrays are well known in genetic engineering and information retrieval). One theme of the Enterprise Tomograph will be the Enterprise Introspector. It generically calculates the footprint of a business transaction execution in the persistency layer, the footprint of a service call, or the footprint of a message choreography process instance between t0 and t1, or the delta between two database schemas in a specific domain area. The basic algorithm of the Enterprise Tomograph is a modified DC3 algorithm (difference cover modulo 3), known in information retrieval (Dementiev, 2006; Burkhardt et al., 2006). It constructs PAT Arrays and stores textual suffixes in a compressed manner. The set of resulting PAT Arrays is organized in a genre-layered tenancy concept, allowing independent user groups to work on the global index and update it in a concurrent, polychronic way. A quasi-algebra is defined on the set of PAT Arrays: a delta operator, a plus operator, and a pseudo-inverse operator. For instance, the plus operator merges two indices (PAT Arrays) into one encompassing PAT Array.
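As an illustrative sketch of these index structures (a simplified stand-in, not the production algorithm), the following Python fragment builds a PAT-Array-like suffix array for a sequenced text and merges two such indices in the spirit of the plus operator; a naive sort replaces the linear-time DC3 construction, and all function names are hypothetical.

    # Simplified sketch of the index structures described above.
    # A production implementation would use the linear-time DC3
    # construction; a naive sort stands in for it here.

    def build_suffix_array(text):
        """Start positions of all suffixes of `text` in lexicographic
        order -- a PAT-Array-like index."""
        return sorted(range(len(text)), key=lambda i: text[i:])

    def plus_operator(text_a, sa_a, text_b, sa_b):
        """Merge two indices into one encompassing index, analogous
        to the plus operator. Entries are (source, offset) pairs."""
        merged, i, j = [], 0, 0
        while i < len(sa_a) and j < len(sa_b):
            if text_a[sa_a[i]:] <= text_b[sa_b[j]:]:
                merged.append(("A", sa_a[i])); i += 1
            else:
                merged.append(("B", sa_b[j])); j += 1
        merged.extend(("A", p) for p in sa_a[i:])
        merged.extend(("B", p) for p in sa_b[j:])
        return merged

The merge walks both indices once, which mirrors the claim that PAT Arrays can be merged in linear time (ignoring the cost of individual suffix comparisons, which real implementations bound with longest-common-prefix information).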
Integration deltas are determined by modified tree distance algorithms (Lu et al., 2001) and displayed in a structured textual manner. Typically, integration concepts are best known by individual developers and maintainers of the VLBA community. They may want to register additional concept mining algorithms with the integration ecosystem of the Enterprise Tomograph. In this way the integration knowledge of a few integration experts can be shared across the community. The Enterprise Tomograph provides a domain-specific search UI and displays the integration aspects. The output can be iteratively refined and may serve as input/workload for the Enterprise Engineer for further processing, for example refactoring, navigation, semantic tagging, search & replace within integration genres, multiple breakpoints, code & data compare, and undo. The Enterprise Tomograph itself is designed for service enabling and can be hosted side-by-side with a VLBA. It provides highly efficient, logarithmic access to the integration concepts of a VLBA in the space-time continuum. Enterprise Tomography is a proposed collaborative, index-snapshot-based, and domain-specific approach for efficient incremental development and maintenance of VLBAs in the enterprise software industry. Figure 1 outlines the Enterprise Tomography approach in the context of cross-organizational VLBAs. The Enterprise Tomograph crawler extracts integration ontology forests from VLBAs, Enterprise Platforms, and databases. In-memory indices and delta indices are created. Via the search UI, a full domain-specific search is possible. In the Delta Monitor, changes of integration concepts can be traced.
Enterprise Tomography - Use Cases and Scenarios

First of all, Enterprise Tomography supports program comprehension in the maintenance and development phases of VLBAs and Enterprise Platforms.
Figure 1. Enterprise tomography for VLBAs and enterprise platforms
Views from different semantic perspectives can be projected onto a domain or area of expertise. These views can incrementally be refined to a set of integration concepts of interest. In this way a dynamic model is derived from the enterprise software and its contextual data. Maintainers and developers are in a position to assess the side effects of software changes and enhancements. Furthermore, Enterprise Tomography supports refactoring of Enterprise Platforms and VLBAs in software engineering development phases. Cross-system where-used lists of integration concepts can easily be generated. Textual search-and-replace functionality based on the where-used lists can be provided. The search result list of the Enterprise Tomograph serves as the workload for refactoring tasks. In-place navigation to software fragments and contextual data enables software engineers to drill down into the software and business data universe. The Enterprise Tomograph can be seen as an abstract navigation tool for software fragments,
entities, and business data. Depending on the search criteria, a launch pad can be generated. This launch pad points to the appropriate entity-associated operations. In contrast to PageRank prioritization in Web 2.0 search, the Enterprise Tomograph prioritizes integration concepts in the search result list based on reuse frequency. This means integration concepts with the highest reuse frequency appear first. Because of their global impact on the VLBA software, changes to those integration concepts require the highest attention. Regarding real-world VLBAs and Enterprise Platforms, there is normally no semantic linkage between coding fragments, metadata, models, configuration data, business data, corrections, consulting documents, documentation, process journals, system logs, training material, and contextual data. Taking those information sources into consideration, the Enterprise Tomograph joins development- and maintenance-relevant information and provides ubiquitous search and delta tracking functionality. Integration ontologies serve as an interlingua for the participating systems.
Integration ontology extraction algorithms generate normalized ontologies. With a generic search UI, software engineers can locate and visualize cross-cutting concerns. In Test-Driven Development (TDD), software engineers need to re-execute business processes repeatedly. A generic inverse operator for business processes is advantageous here. Enterprise Introspection, as a specialization of Enterprise Tomography, is focused on the business data and configuration data of VLBAs and Enterprise Platforms respectively. The Enterprise Introspector visualizes the data delta (database footprint) of a business process, of a business transaction, or of a service call sequence in a domain. Based on the delta index, the Enterprise Introspector enables restoring the business process data to its former condition. The business process chain can then be executed again. Traditional IDEs support debugging functionality. Breakpoints can be set directly in the coding. Conditional breakpoints and watchpoints can be defined. Typically, in the enterprise software industry, software maintainers are not the authors of the coding they are responsible for. This means it is very difficult for software maintainers to identify appropriate breakpoints, let alone define complex conditional breakpoints. With its refinement technique, the Enterprise Tomograph supports software engineers in finding appropriate software fragments. The contextual data for finding appropriate locations is taken into consideration. Along the lifecycle of VLBAs and Enterprise Platforms, quality management engineers need indicators for quality assessment purposes. Software metrics and architecture metrics are appropriate means for that purpose. The Enterprise Tomograph attaches metric figures to integration ontology concepts or generates virtual metric entities in tree-based formats. Delta tracking of the metric figures on a domain is facilitated by the Enterprise Tomograph. Integrated exception handling and alerting are efficient functions for setting up regulator circuits in software engineering.
Enterprise software is often deployed in releases. Code lines are their counterparts in software logistics. Development teams are focused on areas of expertise or domains respectively. Taking functional upward compatibility into consideration, the functionality of lower releases is a subset of the functionality of higher releases. In other words: release n+1 = release n + delta. Typically, the codebase of the delta is significantly smaller than the codebase of release n. Release n+1 and release n are similar code clones and nearly identical. The Enterprise Tomograph serves as an engineering environment for controlling the code clones: incremental deltas can be tracked with the delta monitor. The delta can be visualized in semantic genres, e.g., new service calls, new code fragments, new reusables, new business objects, new UI objects, new metadata, or new configuration data between code clone A and code clone B. Corrections made in lower releases normally have to be up-ported to the latest release. This is a prominent example of when code clones come into play: the delta is the correction across releases 1..n. The same applies to down-ports of functionality to lower releases. The Enterprise Tomograph supports adding the delta to each individual release. Sharing integration knowledge amongst SCRUM teams and within SCRUM teams is an essential use case. The Enterprise Tomograph serves as a SCRUM environment. It provides an infrastructure for sharing integration knowledge categorized in semantic genres. A full search in a genre and a search on the delta in a genre allow efficient location of the involved integration concepts. This supports comprehension of integration in Enterprise Platforms and VLBAs. Generic operations and navigation based on the located integration concepts allow further development of the enterprise software. This includes mass processing on the worklist provided by the search result of the Enterprise Tomograph. Changes and enhancements made by the SCRUM teams can be visualized instantaneously. In the delta monitor the results can be verified.
Enterprise Tomography Placement in Agile Development and Agile Maintenance

In Enterprise Business Engineering, the evolution of VLBA integrity is to be regarded in a holistic way. In Application Lifecycle Management, the VLBA software itself cannot be considered in isolation; there are multiple correlations between the VLBA software, the business data, and contextual data like documentation, solution databases, e-training databases, and model databases, just to name a few. In our approach we extract Enterprise Integration Ontologies out of this holistic data universe, map the retrieved ontologies to a hierarchical textual representation, perform parallel indexing on the textual representation, organize the indices on abstract domain levels, and make the integration knowledge available to development and maintenance teams via a search engine. The main contribution of our approach is an efficient algorithm for delta determination of Enterprise Integration Ontologies. Figure 2 illustrates the embedding of the Enterprise Tomograph in the SCRUM development process. SCRUM teams are concerned with refactoring, error detection, and functional enhancement of VLBA software. For the sake of consistent change of the software, SCRUM teams permanently need, during their operations, an up-to-date view of the integration concepts in a domain of the VLBA software. SCRUM development is inherently an incremental approach. The software engineers need to track the progress of their teamwork between subsequent points in time and verify their enhancements and modifications. The Enterprise Tomograph provides extended domain-based search capabilities to identify the integration concepts of interest. For instance, when executing a business transaction on the Enterprise Platform, the Enterprise Tomograph visualizes the footprint (data delta) on the database between two points in time. Beyond that, the Enterprise Tomograph visualizes software changes and documentation
changes and model changes, amongst others. The Enterprise Tomograph abstracts from the technical data sources of the enterprise data universe and makes integration knowledge accessible. The Enterprise Tomograph can be fed with integration ontology mining algorithms. In real-world scenarios those algorithms are provided by a few integration experts. In this way the integration knowledge is shared by the SCRUM teams using the Enterprise Tomograph. The Enterprise Tomograph functions as a generic domain-specific search engine based on registered ontology mining algorithms. To make our approach applicable to real-world VLBAs and Enterprise Platforms, we assume the existence of a dendrogram and an enumeration of the dendrogram nodes. According to Figure 3, a dendrogram categorizes the software and contextual data fragments into domains (sub-trees in the dendrogram). A dendrogram sub-tree bundles semantically related entities of a VLBA. The dendrogram must cover the complete VLBA software. A visitor enumerates the dendrogram nodes. Ontology mining algorithms operate on dendrogram nodes. Indices are created and organized in multidimensional grids spanned by the axes of integration genres and dendrogram nodes. Moreover, we assume that the dendrogram can be processed randomly and independently. This implies that the nodes are the base granularity for parallel processing of ontology mining. For instance, in SAP Enterprise Platforms, the package hierarchy or the application component hierarchy are representatives of a dendrogram. In a typical scenario, software maintainers start with error symptoms of a VLBA and are interested in the involved integration concepts and the related entities. For instance, a production order 4711 has inconsistent confirmations on consumed components. The maintainer would start with a node of the manufacturing domain, as sketched below.
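The following minimal sketch illustrates this scenario; the names (DendrogramNode, domain_index) and the toy inverted index per node are hypothetical, and only the explode-and-merge step is shown, not the PAT Array machinery.

    # Hypothetical sketch: dendrogram nodes with attached per-node
    # indices, and the 'explode and merge' step for a domain.

    class DendrogramNode:
        def __init__(self, name, index=None, children=()):
            self.name = name
            self.index = index or {}      # term -> set of hits
            self.children = list(children)

    def domain_index(node):
        """Explode the sub-tree under `node` and merge all attached
        node indices into a single domain index."""
        merged, stack = {}, [node]
        while stack:                      # visitor-style enumeration
            n = stack.pop()
            for term, hits in n.index.items():
                merged.setdefault(term, set()).update(hits)
            stack.extend(n.children)
        return merged

    # The maintainer starts at the manufacturing node and queries
    # the merged domain index for the production order id.
    mfg = DendrogramNode("manufacturing", children=[
        DendrogramNode("job_floor", {"4711": {"order header", "items"}}),
        DendrogramNode("confirmations", {"4711": {"confirmation items"}}),
    ])
    print(domain_index(mfg).get("4711"))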
Figure 2. Application lifecycle management with enterprise tomograph
The Enterprise Tomograph would then explode the associated sub-tree and merge the attached indices into a domain index. The resulting index can be queried with the production order id. As a result, the maintainer gets the data structure of the production order, the production order instance, persistency, documentation, and the production order software fragments of job floor planning. In the next step he can refine his query for the related components. Now he may want to create new confirmations on a production order component. In a subsequent step he can track the evolution of the identified ontology concepts (in our example the production order component confirmations) with the delta operator of the Enterprise Tomograph. Based on the delta results, the software engineer has the context of the error symptom as a foundation for his diagnosis. For deeper understanding, Figure 4 displays a VLBA in an abstract space-time continuum. A vector in this space represents a dendrogram node index at a point in time. The x-axis represents the instances of the VLBA software and its business data persistency at different points in time. The y-axis enumerates the locations of the VLBA software within the software landscape. The z-axis
enumerates the domains of the dendrogram path, beginning from the root node and ending at a leaf node. The difference of two vectors is calculated by the Enterprise Tomograph delta operator. In our example we see the difference of SCM (coding + data at different points in time) at a fixed location. Another use case is the comparison of a VLBA at different locations on the y-axis. For example, the difference between a VLBA in a development system and in a production system can be determined. This means we compare code clones. In this regard, the difference of configuration data in different software locations is valuable information as well. At first sight our approach is similar to traditional Web 2.0 search engine techniques: the VLBA data universe is crawled and indexed with semantic data organization, plus querying via a search UI by developers and maintainers. To meet developers' and maintainers' requirements, we need to make optimizations in different respects, as shown in Figure 5.
Figure 3. Dendrogram of a VLBA
Figure 4. VLBA data universe
In addition to traditional approaches, in the Enterprise Engineering context we need to focus on domain-specific search refinements and a flexible delta calculation based on domains. Generic operators on generic in-memory data representations are the main focus of this chapter. Enterprise Tomography is focused on integration ontologies and their evolution. It can be regarded as a special case of ontology integration (Abels et al., 2005; Nicklas, 2005).
Enterprise Tomograph - Building Blocks and Data Flow

The Enterprise Tomograph is divided into building blocks for time-consuming tasks (data extraction and data organization) and building blocks for time-efficient tasks like querying, index merging, and visualization. Figure 6 illustrates the anatomy and high-level architecture of the Enterprise Tomograph. First of all, the VLBA dendrogram is determined. Based on the dendrogram, the Enterprise Tomograph starts crawling the VLBA for the contained nodes. For each node and for each semantic genre, an ontology mining algorithm is executed.
It supplies rooted unordered labeled trees. Those trees are sequenced. The skewed DC3 algorithm (difference cover modulo 3) is performed on the sequenced data. The resulting index and the original sequenced data are compressed and organized in memory in a two-dimensional node-genre grid. Each element of this grid points to a compressed PAT Array and its associated sequenced data. Based on a VLBA dendrogram, this procedure can be repeated for different data sources, i.e., for different VLBA locations or for different points in time. Later on, the data sources can be compared with the Enterprise Tomograph delta operator. When the developer needs to perform a search in a domain, he selects a dendrogram node. The node sub-tree is exploded and all node-associated PAT Arrays, together with their sequenced data, are decoded. The PAT Arrays are merged. The results are published in the Domain Index Layer. A Query Engine executes queries based on the domain index. Monitor 1 visualizes the search results. Now the developer may want to refine the search results. The search results are sequenced and re-indexed with the DC3 algorithm.
Figure 5. Design principles of the enterprise tomograph
A new PAT Array comes into being, is coded, and is placed into a data source which is published to the Domain Index Layer again. The Query Engine searches for the new search pattern, and the refined results are displayed again in Monitor 1. This search-refinement round trip is depicted in Figure 6 with the circular arrow. The phases of the refinement round trip can be started asynchronously: while the user is evaluating search results on Monitor 1, the time-consuming PAT Array construction via DC3 can be started in parallel as preparation for the next refinement query. In this way the user is not aware of interruptions caused by subsequent time-consuming index construction. In the delta mode of the Enterprise Tomograph, delta calculation is performed on two data sources. As explained later in more detail, the domain indices of source m and source n are merged together with watermark data. Delta trees of both tree sets are determined. Monitor 1 displays the delta trees of source m in full, whereas Monitor 2 displays the delta trees of source n in full. The edit scripts of the delta trees are visualized in the Delta Monitor. The main part of the VLBA crawling mechanism is the ontology mining algorithm, as outlined
in Figure 7. Basically, for a given dendrogram node and a semantic genre, a forest of rooted unordered labeled trees is determined. The VLBA data universe serves as the input. Data extraction happens with subsequent stemming algorithms for filtering significant information relevant to the integration context. According to rule sets, ontologies are calculated and normalized. The resulting trees are annotated with tree hash values and with tree node hash values as labels. The view projects tree nodes onto the node set of interest. Parameters influence the behavior of the ontology mining algorithm. Ontology mining algorithms reside in a framework and are orchestrated according to the Inversion of Control design pattern. Integration ontology mining algorithms can be registered by integration experts. Rooted unordered labeled trees are used for representing integration concepts. An exemplary integration ontology instance is displayed in Figure 8. Here the business object instance relations, coding fragments, data persistency, and APIs, amongst others, are highlighted. Trees containing the integration concepts are sequenced to textual data, which is indexed with the DC3 algorithm.
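One plausible way to compute such order-independent labels and to sequence a tree to text is sketched below; the concrete scheme (hashing sorted child hashes, parenthesized sequencing) is an illustrative assumption, not a prescription.

    import hashlib

    def node_hash(label, children):
        """Hash of a node in a rooted unordered labeled tree. Child
        hashes are sorted first, so the result does not depend on
        child order."""
        h = hashlib.sha256(label.encode())
        for ch in sorted(node_hash(l, c) for l, c in children):
            h.update(ch.encode())
        return h.hexdigest()

    def sequence(label, children):
        """Sequence a tree to text (children in canonical hash order)
        so that the result can be indexed, e.g., with DC3."""
        ordered = sorted(children, key=lambda t: node_hash(*t))
        return "(" + label + "".join(sequence(*t) for t in ordered) + ")"

    # A tiny ontology fragment as (label, children) pairs.
    tree = ("ProductionOrder", [("Header", []), ("Items", [("Item10", [])])])
    print(node_hash(*tree))
    print(sequence(*tree))

Two trees then carry equal hash values exactly when they are equal as unordered labeled trees, which is what the delta operator later exploits.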
Figure 6. Anatomy and high level architecture of enterprise tomograph
Figure 7. Integration ontology mining
Enterprise Tomograph - Delta Operator

In this section we want to examine the delta determination and delta visualization of enterprise
integration ontologies. Tree distance algorithms have generated considerable interest in the research community in recent years. Assume there are two ontology trees originating from different data sources, as depicted in Figure 9.
Figure 8. Example for integration ontology
Each node is annotated with a label, i.e., with a hash value calculated by the ontology mining algorithm mentioned before. Now the question arises which minimal set of operations needs to be performed on tree F to obtain tree G. This minimal set of operations is called an edit script. The minimal number of edit operations is called the edit distance. The edit script serves as the basis for the visual mapping of tree F onto tree G in the Enterprise Tomograph Delta Monitor. It is worth noting that, as of today, delta determination for rooted unordered labeled trees is considered NP-complete. Delta determination for ordered labeled trees is much more efficient. Because of paging in the Delta Monitor, it is not mandatory to determine all tree deltas at once. Although an extremely time-consuming procedure for large trees, delta determination for reasonably sized unordered labeled trees can be performed efficiently on demand at visualization time. In contrast to the original definition of labeled trees (Shasha et al., 1992), we take non-unique labeled trees into consideration.

Figure 9. Tree distance visualization with edit script
We assume that delta determination for unordered non-unique labeled trees is a relaxed or similar problem in comparison to delta determination for unique labeled trees. A proof of this can be made with the Simplex Algorithm known from dynamic programming (Bille, 2005). This issue is not detailed in this chapter. In the next section we want to explain the skewed DC3 algorithm for PAT Array construction. As explained in Figure 10, PAT Array construction means an indirect lexicographical sort of the semi-infinite suffixes of a given string. In (Burkhardt et al., 2006) it is proven that this construction takes a linear number of operations. The DC3 algorithm is a skewed divide & conquer algorithm. Basically, the steps mentioned in Figure 10 are executed. A detailed pseudo-code algorithm and a concrete implementation of DC3 are given in (Burkhardt et al., 2006). PAT Arrays are space-efficient data structures containing an indirect hierarchical lexicographic sort order according to Patricia tries. Extended PAT Arrays accelerate the search procedure.
Figure 10. DC3 algorithm for PAT array construction
The longest common prefix (LCP) can be skipped during search (Abouelhoda & Kurtz, 2005). A drawback of the extension is the additional space consumption in the PAT Array. PAT Arrays in the context of the Enterprise Tomograph are advantageous because PAT Arrays can be merged in linear time. A PAT Array is the basis for logarithmic indirect search. Assume all semi-infinite suffixes with a prefix equal to the search string are to be identified. The PAT Array A' points to semi-infinite suffixes. A' is divided into two intervals: [left bound ... medium] and [medium ... right bound]. The medium serves as the basis for indirect comparison: if the search string is lower than the semi-infinite suffix at medium, then the new right border := medium; otherwise the new left border := medium. The new medium is determined in the middle of the new interval. This procedure is re-iterated until the borders converge. All lexicographical neighbors can be found in a row directly behind the location identified via this indirect logarithmic search.
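The following sketch captures this indirect binary search; it returns the contiguous run of index positions whose suffixes start with the search string (variable names are illustrative).

    def pat_search(text, sa, pattern):
        """Indirect logarithmic search in suffix array `sa` of `text`.
        Returns the half-open interval [lo, hi) of positions in `sa`
        whose suffixes begin with `pattern`."""
        def prefix(k):                  # pattern-length prefix of suffix k
            return text[sa[k]:sa[k] + len(pattern)]
        lo, hi = 0, len(sa)
        while lo < hi:                  # leftmost suffix >= pattern
            mid = (lo + hi) // 2
            if prefix(mid) < pattern:
                lo = mid + 1
            else:
                hi = mid
        left, hi = lo, len(sa)
        while lo < hi:                  # leftmost suffix > pattern
            mid = (lo + hi) // 2
            if prefix(mid) <= pattern:
                lo = mid + 1
            else:
                hi = mid
        return left, lo

    text = "production_order#order_item#"
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    lo, hi = pat_search(text, sa, "order")
    print([sa[k] for k in range(lo, hi)])  # all start offsets of 'order'

All matches sit in a row between the two converged borders, exactly as described above.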
The delta operator of the Enterprise Tomograph is concerned with detecting the delta trees of two forests originating from different data sources. As shown in Figure 12, the intersection represents integration ontologies existing in both data sources A and B. A subset of this intersection are delta integration ontologies, which need to be identified and visualized according to Figure 9. The delta identification algorithm is explained in the next section in more detail. The basic idea for identifying delta trees is to integrate watermarks into the sequenced forest text content. As outlined in Figure 12, a watermark is assembled as a fixed-length 4-tuple comprising the hash value of the tree ID, the hash value of the tree, the location of the tree, and the offset of the sequenced tree. Watermark integration is done for all trees in both forests A and B. PAT Array construction is done for the textual content of A and of B. The resulting PAT Arrays are merged. On this constructed PAT Array, an indirect binary search is performed for the watermark tag. Watermark neighbors can be found in a sequence.
Figure 11. PAT array search
Figure 12. Delta trees
Iterating over this sequence, we can easily determine trees located in A only, trees in B only, trees in both A and B, and delta trees in A and B. Delta trees have a common prefix (the tree-ID hash value) but a different tree hash value in consecutive PAT Array positions.
With this approach we can reuse PAT Array algorithms for negative duplicate recognition of trees, i.e., delta tree detection. Of course, the administrative data integrated into the textual content must not be part of any visualized search result. Appropriate measures can suppress these occurrences.
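The watermark idea condenses to the following illustrative sketch: every tree contributes a (tree-id hash, tree hash, location) record, the records of both sources are sorted so that records with equal tree-id hashes become neighbors (as they would in the merged PAT Array), and one linear scan classifies the trees. Sorting tuples stands in for the indirect suffix search; all names are hypothetical.

    from collections import defaultdict

    def classify(forest_a, forest_b):
        """forest_x maps tree-id hash -> tree hash for one data source.
        Returns ids found only in A, only in B, unchanged in both,
        and delta trees (same id, different tree hash)."""
        records = sorted(
            [(tid, th, "A") for tid, th in forest_a.items()] +
            [(tid, th, "B") for tid, th in forest_b.items()])
        by_id = defaultdict(list)
        for tid, th, loc in records:      # watermark neighbors in a row
            by_id[tid].append((th, loc))
        only_a, only_b, same, delta = [], [], [], []
        for tid, entries in by_id.items():
            locs = {loc for _, loc in entries}
            if locs == {"A"}:
                only_a.append(tid)
            elif locs == {"B"}:
                only_b.append(tid)
            elif len({th for th, _ in entries}) == 1:
                same.append(tid)
            else:
                delta.append(tid)         # common id, differing tree hash
        return only_a, only_b, same, delta

    a = {"order#1": "h1", "order#2": "h2"}
    b = {"order#1": "h1", "order#2": "h2x", "order#3": "h3"}
    print(classify(a, b))  # ([], ['order#3'], ['order#1'], ['order#2'])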
Figure 13. Delta tree determination
Enterprise Tomograph Index Organization

In large-scale VLBA development and maintenance projects, numerous organizational entities on different granularity levels are involved. These entities potentially perform changes on the VLBA data universe. The aim is to keep the index up-to-date without complete re-indexing after each change made. We assume that the project organization is hierarchical. Figure 14 reflects this situation. The root node represents the global index. Each development location may redefine subsets of the global index. A user, for example, works in a user-specific workspace. He may want to redefine a subset of the index grid of his development team at a given point in time. With the Enterprise Tomograph he can track the progress of his work (delta determination) in comparison to the snapshot on team level. As soon as he accepts the correctness of his changes, he activates his user-specific index. In this case his index subset is updated along the path up to the root node. After this, all predecessor entities can access the user-specific changes. A lock mechanism during the redefinition phase avoids the need for conflict resolution.
This collaborative index update policy ensures a consistent index without the need for re-indexing the whole VLBA data universe.
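A minimal sketch of this activation policy, under the assumption that each organizational level keeps its own copy of an index subset; activation simply propagates the accepted subset along the path to the root (names are hypothetical):

    class IndexNode:
        """One organizational entity (user, team, location, global
        root) holding its current snapshot of an index subset."""
        def __init__(self, name, parent=None):
            self.name, self.parent = name, parent
            self.subset = {}              # key -> indexed content

    def activate(node, changes):
        """Activate a redefined index subset: update the node and
        propagate along the path up to the root, so all predecessor
        entities see the change without global re-indexing. A lock
        on the path is assumed during the redefinition phase."""
        while node is not None:
            node.subset.update(changes)
            node = node.parent

    root = IndexNode("global")
    site = IndexNode("location-1", root)
    team = IndexNode("team-A", site)
    user = IndexNode("user-42", team)
    activate(user, {"order#4711": "new confirmation items"})
    print(root.subset)  # the change is visible at the global index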
Generic Operators in Maintenance Networks

Enterprise software manufacturers normally provide maintenance services for their customers. They are granted access to their individual customers' systems for detailed root cause analysis. In practice, the code bases of the individually deployed enterprise software may be modified or enhanced. Such individual changes may result in error symptoms. To make modifications and enhancements visible, the Inter-Delta Operator of the Enterprise Tomograph comes into play. For example, the enterprise software manufacturer SAP provides an encoded index of a domain of its Enterprise Platform (the reference index) to a P2P network. The customer can calculate the delta between its changed enterprise software and the decoded reference index. The modifications and enhancements are listed in semantic genres. Now the maintainer can semi-automatically assess and estimate whether the error symptom relates to modifications and enhancements made by the customer.
Figure 14. Enterprise tomograph index organization
Another meaningful operator in maintenance networks is the align operator. Assume the enterprise software manufacturer wants to deploy a best practice. This best practice (a delta) can be added to the customer's configuration. In this case, adding a delta is equivalent to aligning the configuration in a domain. The align operator can also be applied to code bases. In this way, updates of enterprise software can be provided. The locate operator performs concept location in integration ontologies. This operator can be applied iteratively for refinement purposes. Based on the result, other orthogonal operators can be applied, e.g., the multiple-breakpoint operator, the launch operator, and the modify operator. The locate operator is the basis for refactoring purposes. In the P2P network, the sender of the encoded domain index is the owner of the key. Only enterprises knowing this key are able to decrypt and use the domain index. The domain index is divided into n fragments. These fragments are redundantly encoded, and the resulting set of m = n + k encoded fragments is distributed to the peers of the network. The retrieval of the encoded index happens according to an implementation of a distributed hash table (DHT).
With Reed-Solomon codes, retrieval of any n of the m fragments suffices for reconstruction of the original domain index (Grolimund, 2007). The mathematical basis is the fundamental theorem of algebra (Korner, 2006). If a fragment is damaged, or not transmitted consistently, the retrieval of an additional encoded fragment may be triggered (n different consistent encoded fragments are necessary for reconstruction of the original domain index). The independent fragment retrieval approach ensures an efficient upload and download of large Enterprise Tomograph domain indices, because any n of the m encoded fragments can be retrieved in parallel for reconstructing the original domain index (Grolimund, 2007). In addition, with RS (Reed-Solomon) codes, damaged or inconsistently transmitted encoded fragments can be reconstructed with high reliability. Encoded fragments contain attached redundant information (a polynomial). With polynomial division, their correctness can be evaluated. With the help of RS erasure codes, damaged fragments can be repaired (forward error correction). Figure 15 outlines the interplay of the generic operators in the P2P maintenance network.
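The n-of-m property can be illustrated with polynomial interpolation over a prime field, which is the core idea behind Reed-Solomon erasure coding. The sketch below shows erasure recovery only; real RS codes work over GF(2^8) and additionally correct corrupted fragments.

    P = 2**31 - 1  # a Mersenne prime; real RS codes use GF(2^8)

    def _interpolate(points, x):
        """Evaluate the Lagrange polynomial through `points` at x (mod P)."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def encode(data, m):
        """Systematic encoding: the unique polynomial of degree < n
        through (1, d1)..(n, dn) is evaluated at x = 1..m, so any n
        of the m fragments determine it."""
        pts = list(enumerate(data, 1))
        return [(x, _interpolate(pts, x)) for x in range(1, m + 1)]

    def decode(fragments, n):
        """Reconstruct the n data symbols from any n intact fragments."""
        pts = fragments[:n]
        return [_interpolate(pts, x) for x in range(1, n + 1)]

    data = [101, 202, 303]                    # n = 3 index symbols
    frags = encode(data, m=5)                 # m = n + k = 5 fragments
    print(decode([frags[4], frags[1], frags[2]], n=3))  # -> [101, 202, 303]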
Figure 15. Generic operators in maintenance network
Related Works

The Enterprise Tomograph with its delta operator has similarities to ontology versioning in ontology management frameworks (Noy & Musen, 2004). Integration concept location is related to identifying cross-cutting concerns in enterprise software. Ontology mining relates to static and dynamic aspect mining in a more abstract sense (Krinke, 2006). Techniques used in (Hollingsworth & Williams, 2005) can be seen as a special case of Enterprise Tomography. Semantic integration in VLBAs is covered in (Brehm & Haak, 2008). In Enterprise Tomography, integration ontologies are derived from VLBAs. Integration ontologies as an intermediate dynamic model are used for ontology integration. Ontology concepts are also used as a model for program comprehension in combination with technically unrelated artifacts (Panchenko, 2007).

Conclusion
Enterprise Tomography is an efficient approach for Application Lifecycle Management of VLBAs and Enterprise Platforms. It supports development and maintenance teams in tracking incremental changes of integration concepts in VLBA software, business data, and contextual data. Full domain-specific search of integration concepts, refinement search, and delta search and its visualization are realized with enhanced standard scientific algorithms. Interchangeable enhanced PAT Arrays, PAT-Array-based delta tree recognition, tree distance algorithms, and tree mappings based on dynamic programming are the main algorithms used in Enterprise Tomography. The complexity of Enterprise Tomography can easily be derived from the complexities of the individual enhanced standard algorithms involved. According to the VLBA definition, the Enterprise Tomograph, with its generic operators in P2P maintenance networks, can itself be regarded as a VLBA. The underlying business process is maintenance service provisioning for deployed enterprise software.
References

Abdullah, R., Tiun, S., & Kong, T. E. (2001). Automatic topic identification using ontology hierarchy. In Proceedings of Computational Linguistics and Intelligent Text Processing, Second International Conference (CICLing), Mexico City, Mexico (pp. 444–453). Berlin, Germany: Springer.

Abels, S., Haak, L., & Hahn, A. (2005). Identification of common methods used for ontology integration tasks. In Proceedings of the First International Workshop on Interoperability of Heterogeneous Information Systems, Bremen, Germany (pp. 75–78). New York: ACM.

Abouelhoda, M. I., & Kurtz, S. (2005). Replacing suffix trees with enhanced suffix arrays. Journal of Discrete Algorithms, 2, 53–86. doi:10.1016/S1570-8667(03)00065-0

Bille, P. (2005). A survey on tree edit distance and related problems. Theoretical Computer Science, 337(1-3), 217–239. doi:10.1016/j.tcs.2004.12.030

Brehm, N., & Haak, L. (2008). Ontologies supporting VLBAs: Semantic integration in the context of FERP. In Proceedings of the 3rd International Conference on Information and Communication Technologies: From Theory to Applications, ICTTA 2008 (pp. 1–5).

Burkhardt, S., Kärkkäinen, J., & Sanders, P. (2006). Linear work suffix array construction. Journal of the ACM, 53(6), 918–936. doi:10.1145/1217856.1217858

Dementiev, R. (2006). Algorithm engineering for large data sets. Unpublished doctoral dissertation, University of Saarland, Saarbrücken.

Grabski, B., Günther, S., Herden, S., Krüger, L., Rautenstrauch, C., & Zwanziger, A. (2007). Very large business applications. Berlin, Germany: Springer.
Grolimund, D. (2007). Wuala - a distributed file system. Caleido AG, ETH Zürich. Online publication in Google Research, Google TechTalks.

Hollingsworth, J. K., & Williams, C. C. (2005). Automatic mining of source code repositories to improve bug finding techniques. IEEE Transactions on Software Engineering, 31(6), 466–480. doi:10.1109/TSE.2005.63

Korner, T. E. (2006). On the fundamental theorem of algebra. The American Mathematical Monthly, 113(4), 347–348.

Krinke, J. (2006). Mining control flow graphs for crosscutting concerns. In Proceedings of the 13th Working Conference on Reverse Engineering (WCRE): IEEE International Astrenet Aspect Analysis (AAA) Workshop, Benevento, Italy (pp. 334–342).

Lu, C. L., Su, Z.-Y., & Tang, C.-Y. (2001). A new measure of edit distance between labeled trees. In Proceedings of Computing and Combinatorics, 7th Annual International Conference, COCOON 2001, Guilin, China (pp. 338–348).

Ludewig, J., & Opferkuch, St. (2004). Softwarewartung - eine Taxonomie. Softwaretechnik-Trends, 24(2). Gesellschaft für Informatik.

Nicklas, D. (2005). Ein umfassendes Umgebungsmodell als Integrationsstrategie für ortsbezogene Daten und Dienste. Unpublished doctoral dissertation, University of Stuttgart, Stuttgart.

Noy, N. F., & Musen, M. A. (2004). Ontology versioning in an ontology management framework. IEEE Intelligent Systems, 19(4), 6–13. doi:10.1109/MIS.2004.33

Opferkuch, St. (2004). Software-Wartungsprozesse - ein Einblick in die Industrie. Fachbericht Informatik, Nr. 11/2004, Universität Koblenz-Landau.
Panchenko, O. (2007). Concept location and program comprehension in service-oriented software. In Proceedings of the IEEE 23rd International Conference on Software Maintenance: Doctoral Symposium, ICSM, Paris, France (pp. 513–514).
Shasha, D., Statman, R., & Zhang, K. (1992). On the editing distance between unordered labeled trees. Information Processing Letters, 42, 133–139. doi:10.1016/0020-0190(92)90136-J
This work was previously published in Social, Managerial, and Organizational Dimensions of Enterprise Information Systems, edited by Maria Manuela Cruz-Cunha, pp. 232-251, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 1.11
Enterprise Information System Security: A Life-Cycle Approach

Chandan Mazumdar, Jadavpur University, India
Mridul Sankar Barik, Jadavpur University, India
Anirban Sengupta, Jadavpur University, India
Abstract

There has been an unprecedented thrust in employing computers and communication technologies in all walks of life. The systems enabled by information technology are becoming more and more complex, resulting in various threats and vulnerabilities. The security properties, like confidentiality, integrity, and availability, are becoming more and more difficult to protect. In this chapter, a life-cycle approach to achieve and maintain security of enterprises has been proposed. First, enterprise information systems are looked at in detail. Then, the need for enterprise information system security and the problems associated with security implementation are discussed. The authors
consider enterprise information system security as a management issue and detail the information security parameters. Finally, the proposed security engineering life-cycle is described in detail, which includes the Security Requirement Analysis, Security Policy Formulation, Security Infrastructure Advisory Generation, Security Testing and Validation, and Review and Monitoring phases.
Introduction

There has been an unprecedented thrust in employing computers and communication technologies in all walks of life, including business, education, and governance, in almost all countries. This is a one-way trend in the sense that there is no going back. While this means lower cost, operational
efficiency, and client satisfaction, on the flip side, the systems enabled by information technology are becoming more and more complex, resulting in various vulnerabilities. Also, there are innumerable threats which exploit those vulnerabilities. More importantly, due to global connectivity through the Internet, threats are not confined to a particular area or region; they are omnipresent. These threats pose problems for economic and administrative independence. In the context of information security, as our information infrastructures are becoming more and more complex and connected, properties like confidentiality, integrity, and availability are becoming more and more difficult to protect and achieve. The adoption of Information Technology Acts in different countries provides the legal framework for acceptance of electronic documents in business and governance, as well as to deter wrong-doers. Also, the international community is adopting standards such as ISO 27001 (Snare, 2005) and ISO 17799 (Plate, 2005) for best practices in security management. All these standards have evolved from the knowledge, experience, and expertise of international experts. It has been recognized that the security of enterprises has to be tackled from the point of view of a management structure rather than from a purely technological angle. The rest of this chapter is organized as follows. Section 2 presents some background information and defines an enterprise and its functionality. It also details enterprise information systems and discusses the need for enterprise information system security. Section 3 lists the issues, controversies, and problems associated with security implementation. It describes enterprise information system security as a management issue and details the information security parameters. Section 4 describes our proposed security engineering life-cycle. Section 5 lists a few future research areas in enterprise information system security. Finally, our conclusions are in Section 6.
Background

Enterprise and its Functionality

The Compact Oxford English Dictionary (Weiner, 1991) defines an "enterprise" as "a project or undertaking, especially a bold one"; "bold resourcefulness"; or "a business or company". Webopedia states that an enterprise is "a business organization. In the computer industry, the term is often used to describe any large organization that utilizes computers". Combining these, we define an enterprise as an organization (industry/government/academic) created for business or service ventures. From the information security point of view, an enterprise is characterized by its business goals, business activities, organizational structure, and assets and infrastructure. The Compact Oxford English Dictionary (Weiner, 1991) defines "information" as "facts or knowledge provided or learned; what is conveyed or represented by a particular sequence of symbols, impulses, etc". The Wikipedia entry for information is "the result of processing, manipulating and organizing data in a way that adds to the knowledge of the receiver". Thus, information can be viewed as data that is organized and accessible in a coherent and meaningful manner. The generation and use of information has some commonalities in different types of enterprises. For example, all of them rely on user and operator interactions, reliable storage and retrieval, correct processing, as well as timely and good-quality dissemination of information. More and more enterprises are becoming dependent on the efficiency and quality of the generation and processing of information. Information has become the prime mover in the growth and sustenance of all kinds of enterprises. Information, and the technology supporting the creation and management of information, act as important assets of any enterprise. Thus there is a specific need to protect these assets.
Enterprise Information System

The various information resources which are the building blocks of any enterprise information system are typically classified into five categories (not necessarily disjoint): containers, transporters, sensors, recorders, and processors (Denning, 1999). Containers are physical media that hold information. Examples include human memories, computer memories, print media, tapes, disks, and containers for the above. Objects and communication systems that carry information from one location to another are transporters. Examples include human messengers, who carry information whenever they move from one place to another; physical transportation systems, e.g., trucks, planes, and postal systems; point-to-point telecommunications systems; broadcast media; and computer networks. Sensors extract information from other objects and the environment. Examples are human sensors, cameras, microphones, scanners, and radar. Devices like printers, tape recorders, and disk writers that place information in containers are called recorders. Finally, objects that manipulate information are information processors. Examples include humans, computer hardware, and software. All these resources in an information system work in unison, enabling information flow from one container to another and over a variety of transporters. Sensors extract information from the physical environment, which is then computerized, processed, broadcast over radio and television, transmitted over telecommunications systems and computer networks, and fed into devices that control processes in the environment, such as heating and cooling. This interconnectivity of resources also paves the way for hackers to initiate operations that affect resources other than those explicitly hit.
Need for Enterprise Information System Security

To get a feeling for the urgent need for information system security in the present-day scenario, we have to look back at the age of purely paper-based enterprises. In those days, security of valuable information was primarily provided by physical and administrative means. But the widespread use of sophisticated data processing equipment in day-to-day enterprise activities necessitated a fundamental change in enterprise information security. This also became evident with the introduction of distributed transaction systems such as railway reservation systems, ATM networks, enterprise WANs, etc. Here the main concern is the security of data during transmission over private or public networks (Hassler, 2001). Documents play a key role in most human activities, ranging from commerce, foreign policy, and military action to personal interactions, and success in most cases depends on all the parties involved having confidence in the integrity of the documents involved. Documents typically carry signatures and dates which require protection from disclosure, tampering, or destruction. With information systems becoming ever more pervasive and essential to our day-to-day activities, electronic information is taking over many of the roles traditionally performed by paper documents. This calls for efficient technologies for electronic documents to replace functions traditionally performed with paper-based documents. To survive in the present-day competitive business scenario, enterprises are often driven to provide their services online. But the security of online transactions becomes a major issue for enterprises which need to take such a decision. Also, the security requirements of individual transactions vary widely from one type to another. For example, a million-dollar commercial construction loan being approved by all the parties involved has a much
stronger security requirement than basic consumer transactions such as purchasing a CD or a customer service request. Additionally, confidential documents such as financial or health records, when put online, require enhanced security to ensure personal privacy. In the light of the above discussion, we are now in a position to answer the question "Why is information security important?", as it is crucial for enterprises to define why they want to achieve information security in order to determine how they will achieve it. Information security is important for the following reasons:

• To protect enterprise assets: This is one of our primary goals. By "assets" we mean the hardware, software, and information assets of an enterprise. Hardware assets (Casar, 2004) consist of computing systems, communication links, and IT sites (or rooms) in which computing systems are installed. Software assets are the automated and manual applications that process information. Information assets pertain to input, processed, output, and stored data.

• To gain a competitive advantage: Developing and maintaining effective security measures can provide an enterprise with a competitive advantage over its peers. Information security is particularly important in the area of e-commerce and financial services provided over the Internet. It can mean the difference between wide acceptance of a service and a mediocre customer response. For example, how many people do you know who would use a bank's Internet banking system if they knew that the system has been successfully hacked in the past? Not many. They would rather go to the competitors for their Internet banking services. Enterprise security practices instill confidence in customers and employees. This, in turn, makes potential customers' decision of doing business with an enterprise easier, knowing that best security practices are adopted by the enterprise.

• To comply with regulatory requirements and fiduciary responsibilities: Information security officers of every enterprise have a responsibility to ensure the safety and soundness of the enterprise, which also includes the continuing operation of the enterprise. As a result, enterprises that rely on computing elements for their business must develop policies and procedures that address not only the protection of information assets but also the protection of the enterprise from liability. For example, financial organizations must protect shareholders' investments and maximize returns. In addition, organizations are often subject to government regulations, which generally stipulate the safety and security requirements of an organization. Severe penalties are imposed on those enterprises which fail to comply with regulations. In some cases, corporate officers who have not properly performed their regulatory and fiduciary responsibilities are personally liable for any losses incurred by the financial institution that employs them.
Issues, Controversies and Problems of Enterprise Information System Security

Present Scenario

The information infrastructure consists of technologies for gathering, handling, and sharing information. As the dependence on the information infrastructure significantly increases, there are also rising numbers of vulnerability breaches, threats, cyber attacks, and cascading or pervasive failures. They are caused by a lack of inherent security in new technologies and protocols, flaws in commonly used products, failures to address security
concerns in system design and use, lack of management commitment and approach, and lack of user awareness about security. Organizations in sectors such as banking, finance, telecommunications, transportation, and energy have infrastructures that are highly interdependent. Attacks on one infrastructure can damage other infrastructures as well. Concentrated infrastructure attacks could thus have a significant effect on our national security and economy. A look at the information security requirements of organizations reveals that the number of IT assets, the employees and clients using those assets, the nature of their use, and the geographic distribution of all these are huge. Even for a medium-sized enterprise, the task of security management is very complex. Access to critical assets must be controlled after considering their specific security needs (like confidentiality, integrity, and availability needs, as discussed later in this chapter). Other factors, like the dynamic nature of the security requirements and business context, combined with the technological infrastructure, patch management, and security testing, complicate matters. For years, network security was based upon three primary products: firewalls, Virtual Private Networks (VPNs), and anti-virus software, but this security triad has reached its limit. Automated Internet worms, viruses, and Distributed Denial of Service (DDoS) attacks are more prevalent and virulent than ever before, causing billions of dollars in worldwide damage and impacting companies like Bank of America, Continental Airlines, eBay, and Yahoo. Also, new technologies like IP telephony, WLANs, and Instant Messaging are gaining rapid acceptance, opening up another potential avenue for attacks.
The Future

For the past 10 years, security managers have focused their efforts on the network perimeter with the belief that all external users are 'untrusted' while internal users are trusted. This kind of
approach to security is no longer sufficient. The reasons are:

• The network perimeter is no longer static; remote users, contractors, partners, suppliers, and even web services-based applications are all allowed network access regardless of physical location.

• Automated attacks often enter the network through legal TCP ports before creating havoc on internal systems.

• Many attacks come from disgruntled employees, not outside hackers. According to the CSI/FBI 2003 survey (Richardson, 2003), 77% of companies claim that employees are the most likely to commit security crimes.
As business requirements blur the perimeter between internal and external users, network availability is more important than ever. Worm- and virus-driven interruptions must therefore be minimized and contained. To do so, security managers are planning to apply new technologies to segment and protect internal networks, including end-point security, network behavior modeling, Intrusion Detection/Prevention Systems (IDS/IPS), and internal firewalls. Today's reactive network security strategy brings heartburn to security staff while leaving enterprises vulnerable to attacks. Fortunately, this situation faces extinction. Business executives now realize that security is an important and necessary component of their overall business infrastructure and must be comprehensive and sound. Over the next few years, companies will invest heavily in network security technology while they formalize policies and improve procedures. At the same time, security features will be aggregated and integrated into network hardware and management software. These efforts will help companies establish an enterprise-wide security view while lowering operating costs, a major bottleneck of security technology today.
Enterprise Information System Security is a Management Issue
Enterprise information system security is not absolute. All security measures are relative. Information security should be thought of as a spectrum that runs from very insecure to very secure. The level of security of an information system depends on where it falls along that spectrum. There is no such thing as an absolutely secure information system. Information security is a balancing act that requires the deployment of "proportionate defense". The defenses that are deployed or implemented should be proportionate to the threat. Enterprises determine what is appropriate in several ways: balancing the cost of security against the value of the assets they are protecting; balancing the probable against the possible; and balancing business needs against security needs. Enterprises must determine how much it would cost to have each system or network compromised; in other words, how much they would lose in monetary terms owing to unauthorized access to the system or information theft. By assigning a monetary value to the cost of having a system or network compromised, enterprises can determine the upper limit they should be willing to pay to protect their systems. For many enterprises this exercise is not necessary, because information systems are the lifeblood of the business. Without them there is no business. Enterprises also need to balance the cost of security against the cost of a security breach. Generally, as the investment in security increases, the expected losses should decrease. Companies should invest no more in security than the value of the assets they are protecting. This is where cost-benefit analysis comes into play. Moreover, enterprises must balance possible threats against probable threats, as it is not possible to defend against every possible type of attack. It is therefore necessary to determine which types of threats or attacks have the greatest probability of occurring and then protect against them. For example, it is possible that an organization could be subjected to van Eck monitoring or a high-energy radio frequency (HERF) attack, but the probability is low. It is also important to balance business needs with the need for security, assessing the operational impact of implementing security measures. Security measures and procedures that interfere with the operation of an organization are of little value. Those types of measures are usually ignored or circumvented by company personnel, so they tend to create, rather than plug, security holes. Whenever possible, security measures should complement the operational and business needs of an enterprise. In other words, enterprise information system security is a management issue in which a fine balance must be struck between system efficiency, security expenses, and information protection.
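To make this balancing concrete, classical risk analysis often uses the annualized loss expectancy (ALE): the expected cost of a single compromise multiplied by its estimated annual rate of occurrence. The chapter does not prescribe a particular formula, so the following minimal sketch is only illustrative; all asset values and probabilities in it are hypothetical.

```java
// Illustrative sketch (not from the chapter): an annualized loss expectancy
// (ALE) calculation of the kind often used to put numbers on "proportionate
// defense". All figures below are hypothetical.
public class SecurityCostBenefit {

    // ALE = single loss expectancy (SLE) x annualized rate of occurrence (ARO)
    static double annualizedLossExpectancy(double singleLossExpectancy,
                                           double annualRateOfOccurrence) {
        return singleLossExpectancy * annualRateOfOccurrence;
    }

    public static void main(String[] args) {
        double sle = 250_000.0;  // hypothetical cost of one compromise of the order system
        double aro = 0.2;        // hypothetical: one such incident every five years

        double aleWithoutControl = annualizedLossExpectancy(sle, aro);

        // A proposed safeguard costs 20,000 per year and is assumed to cut the ARO to 0.05.
        double safeguardCost = 20_000.0;
        double aleWithControl = annualizedLossExpectancy(sle, 0.05);

        double netBenefit = aleWithoutControl - aleWithControl - safeguardCost;
        System.out.printf("ALE without control: %.0f%n", aleWithoutControl);
        System.out.printf("ALE with control:    %.0f%n", aleWithControl);
        System.out.printf("Net annual benefit:  %.0f%n", netBenefit);
        // A positive net benefit suggests the defense is proportionate;
        // spending more than the expected loss reduction would not be.
    }
}
```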
Enterprise Information System Security Parameters
The goal of enterprise information system security is to "enable an organization to meet all of its mission/business objectives by implementing systems with due care consideration of IT-related risks to the organization, its partners and customers". This goal can be met through the following security parameters (Stoneburner, 2001).
Availability (of Systems and Data for Intended Use Only)
Availability (Yu, 1990) is a requirement intended to assure that systems work promptly and service is not denied to authorized users. This objective protects against:

• Intentional or accidental attempts to either perform unauthorized deletion of data or otherwise cause a denial of service or data.
• Attempts to use the system or data for unauthorized purposes.
Availability is frequently an organization’s foremost security parameter.
Integrity (of System and Data)
Integrity (Biba, 1975; Clark, 1989; Goguen, 1982; Mayfield, 1991) has two facets:

• Data integrity: the property that data has not been altered in an unauthorized manner while in storage, during processing, or while in transit; and
• System integrity: the quality that a system has when performing the intended function in an unimpaired manner, free from unauthorized manipulation.
Integrity is commonly an organization’s most important security parameter after availability.
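As a brief technical illustration of the data integrity facet, a common mechanism (not mandated by the definitions above) is to store a cryptographic digest with the data and recompute it on retrieval; any unauthorized alteration changes the digest. The record content below is hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Illustrative sketch: detecting a loss of data integrity by comparing a
// stored SHA-256 digest with one recomputed over the retrieved data.
public class IntegrityCheck {

    static byte[] digest(String data) throws Exception {
        return MessageDigest.getInstance("SHA-256")
                .digest(data.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        String record = "order=4711;amount=1200.00;currency=EUR";
        byte[] storedDigest = digest(record);   // computed when the record was written

        // Hypothetical case: the amount was tampered with while in storage.
        String retrieved = "order=4711;amount=9200.00;currency=EUR";
        boolean intact = Arrays.equals(storedDigest, digest(retrieved));

        System.out.println(intact ? "Integrity preserved" : "Unauthorized alteration detected");
    }
}
```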
Confidentiality (of Data and System Information)
Confidentiality (Bell, 1976; Lampson, 1974) is the requirement that private or confidential information not be disclosed to unauthorized individuals. Confidentiality protection applies to data in storage, during processing, and while in transit. For many organizations, confidentiality is frequently behind availability and integrity in terms of importance. Yet for some systems, and for specific types of data in most systems (e.g., authenticators), confidentiality is extremely important.
Accountability (to the Individual Level)
Accountability is the requirement that actions of an entity may be traced uniquely to that entity. Accountability is often an organizational policy requirement and directly supports non-repudiation, deterrence, fault isolation, intrusion detection and prevention, and after-action recovery and legal action.
Assurance (That the Other Four Parameters Have Been Adequately Met)
Assurance is the basis for confidence that the security measures, both technical and operational, work as intended to protect the system and the information it processes. The other four security parameters (integrity, availability, confidentiality, and accountability) have been adequately met by a specific implementation when:

• the required functionality is present and correctly implemented,
• there is sufficient protection against unintentional errors (by users or software), and
• there is sufficient resistance to intentional penetration or bypass.
Assurance is essential; without it the other parameters are not met. However, assurance is a continuum; the amount of assurance needed varies between systems. The five security parameters are interdependent (Stoneburner, 2001). Achieving one parameter without consideration of the others is seldom possible. This is depicted in Figure 1. Confidentiality is dependent on integrity, in that if the integrity of the system is lost, then there is no longer a reasonable expectation that the confidentiality mechanisms are still valid. Integrity is dependent on confidentiality, in that if the confidentiality of certain information is lost (e.g., the superuser password), then the integrity mechanisms are likely to be bypassed. Availability and accountability are dependent on confidentiality and integrity, in that:

• if confidentiality is lost for certain information (e.g., the superuser password), the mechanisms implementing these parameters are easily bypassed; and
• if system integrity is lost, then confidence in the validity of the mechanisms implementing these parameters is also lost.

Figure 1. Security parameter dependencies
All of these parameters are interdependent with Assurance. When designing a system, an architect or engineer establishes an assurance level as a target. This target is achieved by both defining and meeting the functionality requirements in each of the other four parameters and doing so with sufficient “quality”. Assurance highlights the fact that for a system to be secure, it must not only provide the intended functionality, but also ensure that undesired actions do not occur.
Security Services
Security services (Stoneburner, 2001) used in implementing an information technology security capability can be classified into the following three broad categories:

• Support: Comprises mainly generic services which underlie most information technology security capabilities.
• Prevent: Comprises services which prevent security breaches from occurring.
• Recover: Comprises services which focus on the detection of, and recovery from, a security breach.

Supporting Services
Supporting services (Stoneburner, 2001) are, by their very nature, pervasive and inter-related with many other services. The supporting services are:

• Identification (and naming): This service helps in uniquely identifying users, processes, and information resources. Typically, this service is in turn used by other services to identify subjects and objects.
• Cryptographic key management: This service enables secure key management, which plays a key role in implementing other services where cryptographic functions are used.
• Security administration: This service enables efficient administration of the security features of an information system to meet the security requirements of a specific installation and also to account for a constantly evolving operational environment.
• System protections: This service is based on the notion of confidence in the technical implementation of various security functional capabilities. The quality of the implementation is measured both from the perspective of the design processes used and from the specific manner in which the implementation was accomplished. Examples of system protections are: residual information protection, least privilege, process separation, modularity, layering, and minimization of what needs to be trusted.
Prevention Services
These services (Stoneburner, 2001) aim at preventing security breaches from ever happening.

• Protected communications: In a distributed system, the ability to accomplish security objectives is highly dependent on secure communications, which help in achieving integrity, availability, and confidentiality of information while in transit.
• Authentication: This service helps in ensuring that a claimed identity is valid.
• Authorization: This service enables the specification and subsequent management of the allowed actions within a given information system.
• Access control enforcement: This service enables enforcement of the defined security policy of the enterprise once any subject requesting access to a particular object has been validated. The level of security obtained in an information system depends not only on the correctness of the access control decision, but also on the strength of the access control enforcement. A common access control enforcement mechanism is to check identity and requested access against access control lists (a minimal sketch follows this list). File encryption is another example of an access control enforcement mechanism.
• Non-repudiation: This service provides the ability to ensure that senders cannot deny sending information and that receivers cannot deny receiving it. It is treated as a prevention service because it prevents the ability to successfully repudiate an action.
• Transaction privacy: This service helps in maintaining the privacy of individuals transacting with information systems. This has become an essential requirement for enterprises in both the government and private sectors.
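The following is the minimal ACL-based enforcement sketch referred to above. It assumes that the subject's identity has already been validated by the authentication service; the users, objects, and rights are hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of ACL-based access control enforcement: each request
// from an already-authenticated subject is checked against an access control
// list before the operation is permitted.
public class AclEnforcement {

    // object name -> set of "user:action" entries that are permitted
    static final Map<String, Set<String>> ACL = Map.of(
            "payroll-db", Set.of("hr_clerk:read", "hr_manager:read", "hr_manager:write"),
            "order-db",   Set.of("sales_rep:read", "sales_rep:write"));

    static boolean isAllowed(String user, String action, String object) {
        Set<String> entries = ACL.get(object);
        return entries != null && entries.contains(user + ":" + action);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("hr_clerk", "read", "payroll-db"));   // true
        System.out.println(isAllowed("hr_clerk", "write", "payroll-db"));  // false -> denied
        System.out.println(isAllowed("sales_rep", "read", "payroll-db"));  // false -> denied
    }
}
```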
Detection and Recovery Services
Because no system can be made absolutely secure, it is necessary both to detect security breaches and to minimize their impact (Stoneburner, 2001).

• Audit: This service enables the auditing of security-relevant events for intrusion forensics and recovery from security breaches.
• Intrusion detection and containment: This service provides the ability to detect signs of intrusions into information systems in a timely manner, and to take countermeasures to confine them and minimize their effect.
• Proof of wholeness: This service provides the ability to determine whether the integrity of an information system has been compromised.
• Restore 'secure' state: This service provides a way to restore the system to a known secure state in the event of a security breach.
Solutions and Recommendations

Enterprise Information Security Engineering Lifecycle
The Information Security (IS) of an enterprise depends on several factors. The major determinant is the business goal; besides this, the operational context, the technology used, organizational structures, and connectivity also play important roles in determining the approach towards IS. The other important concern is that the IS needs of an enterprise are not static, but depend on the dynamics of operation, changing business goals, technological upgrades, and so on. The process of developing and deploying a
proper IS infrastructure for an enterprise is not a one-time affair; rather, it is a continuous process of analysis, design, monitoring, and adaptation to changing needs. In many enterprises, the changes encountered are frequent. This calls for a structured representation of the different specification documents and for their automatic analysis and generation in an interoperable representation.

This leads to the concept of the Information Security Engineering Life-Cycle, or simply the "Security Engg. Life-Cycle" (Anderson, 2001; Sengupta, 2005). The following phases are considered to be of prime importance for this life-cycle concept (see Figure 2).
Figure 2. Security Engineering life cycle
Security Requirement Analysis
The very first step in building up a (somewhat) secure system is to understand and correctly specify the requirements. In this phase, information regarding the business modalities and IT assets of an enterprise is extracted. The categories of information extracted from the enterprise are:

• The modes of activity of the business.
• The assets, including the information assets, software assets, hardware assets, and service assets, their security requirements vis-à-vis confidentiality, integrity, availability, authentication, non-repudiation, and legal and contractual obligations, and the perceived security threats to the assets.
• The connectivity details.
• The access control requirements on the basis of "need-to-know" and "need-to-use".
Based on the above information provided by the enterprise, an analysis is performed to identify the required security controls (this includes technical as well as managerial controls, as suggested by security standards such as ISO 17799 (Plate, 2005)). An overall risk analysis (Peltier, 2005) is also performed to categorize the assets into high, medium, and low risk zones for the implementation of the security controls. During risk analysis, an attempt is made to identify the threats to which an IT system is exposed due to existing security weaknesses. The probability of each of these threats occurring is then estimated and combined with the protection requirements to rate the existing risks. For any risks which are unacceptable, a set of IT security measures is then selected so as to reduce the probability of occurrence and/or the extent of the potential damage. The OCTAVE Method (Alberts, 2001) and the TEN STEP Process are two well-known risk analysis methodologies. The reports generated in this phase are:

• A structured security specification format
• The risk analysis report
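As an illustration of the risk rating step described above, the sketch below combines an estimated threat likelihood with the protection requirement (impact) and maps assets into high, medium, and low risk zones. The scales and thresholds are hypothetical; established methodologies such as OCTAVE define their own.

```java
// Illustrative sketch of qualitative risk rating: likelihood x impact mapped
// to high/medium/low risk zones. Scales, thresholds, and asset names are
// hypothetical and not taken from the chapter.
public class RiskRating {

    enum Zone { LOW, MEDIUM, HIGH }

    // likelihood and impact each rated 1 (low) to 3 (high)
    static Zone rate(int likelihood, int impact) {
        int score = likelihood * impact;   // simple risk matrix product
        if (score >= 6) return Zone.HIGH;
        if (score >= 3) return Zone.MEDIUM;
        return Zone.LOW;
    }

    public static void main(String[] args) {
        System.out.println("Customer database, likely threat, high impact: " + rate(3, 3));
        System.out.println("Print server, rare threat, medium impact:      " + rate(1, 2));
    }
}
```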
Security Policy Formulation Phase
The goal of the security policy is to translate, clarify, and communicate management's position on security as defined in high-level security principles. A security policy manual (Peltier, 1999) is a set of high-level statements describing the objectives, beliefs, and goals of the organization as far as security is concerned, but not how these solutions are engineered and implemented. Security policies define the overall security and risk control objectives that an organization endorses. Policies should be implementable and enforceable; they should assign responsibility and accountability. They must be documented, distributed, and communicated. They should be technology-independent as far as possible and should not change often. In this phase, the security requirement specification document is parsed to determine the security needs of the enterprise. Based on these security needs, a model policy manual is generated which should conform to a particular information security standard such as ISO 17799 (Plate, 2005). Guidelines and procedures required to implement the policies are also generated. Procedures consist of detailed steps to implement the policies, while guidelines are optional guidance provided to an enterprise to help in the implementation of policies and procedures. The model policy document should be analyzed by the management in conjunction with the non-security enterprise policies in place to finalize the security policy document. The reports generated in this phase are:

• The policy manual
• The guideline manual
• The procedures manual
Security Infrastructure Advisory Phase
A security infrastructure advisory specifies the set of entities, both physical and software, needed to implement the set of identified security controls. It tells an enterprise the details regarding the software necessary, the access rights of individuals, the exact location of security tools, and so on, required to mitigate the security risks of the enterprise. This phase parses the security requirement specification document to identify the security needs of the enterprise and generates a security infrastructure advisory. The advisory lists the security tools required by the organization in order to protect its assets. Exact installation location and configuration advice are also included in the advisory. The report generated in this phase is:

• The security infrastructure advisory

Some information security standards suggest probable security tools which are to be installed in an enterprise in order to protect its assets. For instance, ISO 17799 (Plate, 2005) consists of security controls which detail the types of assets, the corresponding security parameters (confidentiality, integrity, etc.) to be protected, and the security tools needed for the same. After receiving the security infrastructure specification, a cost-benefit and detailed risk analysis is performed by the management of the enterprise concerned. Based on this, the enterprise decides on the particular infrastructure that it would like to implement. This is stated in the form of a security infrastructure selection document. This selection takes into account all kinds of policies of the enterprise. Finally, the enterprise implements the security infrastructure as decided.

Security Testing and Validation
The main objective of security testing is to ascertain that the proposed security infrastructure is in place and working. Beyond this, the system can be tested for faulty combinations of software, known security holes, and potentially dangerous applications that can be compromised to breach the security infrastructure. An integrated security testing tool may be envisaged to automate the objective of security testing as described above. The security testing phase can be further divided into several sub-phases, as described below.

• Test Case Generation: A test case can be viewed as a collection of information based on which the security test is conducted. The preparation of a test case requires input from the requirement specification, a valid security infrastructure selection, a database (vulnerability database) containing information on known security holes, and inputs from the security analyst. The test plan is generated in a domain-specific format, and test cases are generated from the test plan templates.
• Compliance Test: In this sub-phase the security test tool determines whether the specified security infrastructure is in place with proper configuration. The security analyst has the option of deciding whether to proceed with the later sub-phases of the security test, depending on whether the security infrastructure is found to be not implemented, partially implemented, or fully implemented.
• Common Vulnerability Test: After testing the existence of the security infrastructure in the corresponding domain, several tests are conducted to find out whether known vulnerabilities are present that may be used to compromise the system even in the presence of a security infrastructure.
• Validation Report Generation: This is the last sub-phase in the security test mechanism. It is used to generate a validation report for the security requirements in the form of compliance and gaps in the implemented system. This is carried out by applying the risk analysis methodology to the security infrastructure in place, to see how far the risks are mitigated, thus justifying the ROI.

The reports generated in this phase are:

• Compliance test report
• Vulnerability test report
• Validation report
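A rough sketch of the compliance test sub-phase follows: the controls demanded by the security infrastructure selection are compared with an inventory of what is actually installed, and gaps are reported for the validation report. The control names and the inventory are hypothetical stand-ins for real probes.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of a compliance test: required controls from the
// security infrastructure selection are checked against what is actually
// installed. In a real tool the "installed" map would be filled by probing
// hosts and services; here it is hard-coded for demonstration.
public class ComplianceTest {

    public static void main(String[] args) {
        List<String> requiredControls = List.of(
                "perimeter-firewall", "antivirus", "ids-sensor", "log-server");

        Map<String, Boolean> installed = Map.of(
                "perimeter-firewall", true,
                "antivirus", true,
                "ids-sensor", false,
                "log-server", true);

        for (String control : requiredControls) {
            boolean present = installed.getOrDefault(control, false);
            System.out.printf("%-20s %s%n", control, present ? "COMPLIANT" : "GAP");
        }
        // Gaps feed the validation report; the analyst then decides whether
        // the vulnerability tests that follow are still meaningful.
    }
}
```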
Review and Monitoring
The different phases of activities may have to be monitored, and the reports may have to be reviewed. The monitoring process is continuous and may need certain tools and procedures to generate monitoring reports. The review process involves the review of the requirements, policy, infrastructure, and testing methodologies, so that the different entities do not become outdated and, consequently, create holes in the security framework. The review may be triggered by changes in the organizational structure, business goals and activity, technology, legal obligations, and so on. Even in the absence of such triggers, the review may be done periodically, the period being fixed by a policy. The other important trigger for a review is the report of a security incident which may affect, or may already have affected, the enterprise's security. The outcome of review and monitoring may re-initiate the activities of the requirement analysis, policy formulation, risk analysis, infrastructure update, and testing and validation phases. This is how the Security Engg. Life-cycle is expected to work.
Future Trends
The future trend in the enterprise information security approach is to overlay the Plan-Do-Check-Act (PDCA) cycle on the Security Engineering Life-cycle. In fact, the various information security management standards are trying to spell out the activities to be taken up in the different phases of the PDCA cycle and their outcomes. A major paradigm shift is taking place from a qualitative approach towards monitoring with quantitative measures. Quantitative measures are being proposed to judge the quality of security management solutions in a more objective manner. This would also help in justifying the ROI. As already mentioned, the technology trend is to integrate security functionality with the operational functionality of components and subsystems, both hardware and software. This would allow only secure transactions, in order to secure rather open enterprises whose boundaries are becoming fuzzier because of the globalization of the economy in cyberspace.
Conclusion
This chapter has looked at different aspects of enterprise information system security. We began by defining an enterprise and its information systems and discussed the need for securing them. Then the problems associated with the implementation of enterprise information security were discussed. The various security parameters, and their interdependencies, were stated. Finally, a management approach to securing an enterprise was proposed, and a security engineering life-cycle methodology to achieve this was discussed.
Acknowledgment
The authors acknowledge the role of over a dozen students and researchers in the Centre for Distributed Computing, Jadavpur University, for their contributions over the years in shaping and clarifying the concepts presented in this chapter. Most of the work related to the concepts presented in this chapter has evolved during the execution of a couple of R&D projects sponsored by the Dept. of IT, Ministry of Communications and IT, Govt. of India, and a number of consultancy projects taken up by the Centre.
References

Alberts, C., & Dorofee, A. (2001). An Introduction to the OCTAVE Method. Software Engineering Institute, Carnegie Mellon University. Retrieved October 11, 2001, from http://www.cert.org/octave/methodintro.html

Anderson, R. (2001). Security Engineering: A Guide to Building Dependable Systems. New York, NY: John Wiley & Sons, Inc.

Bell, D. E., & LaPadula, L. J. (1976). Secure Computer Systems: Unified Exposition and Multics Interpretation (ESD-TR-75-306, MTR 2997 Rev. 1). Bedford, MA: The MITRE Corporation.

Biba, K. J. (1975). Integrity Considerations for Secure Computer Systems (MTR-3153). Bedford, MA: The MITRE Corporation.

Casar, E., Scheer-Gumm, G., & Simons-Felwor, P. (2004). IT Baseline Protection Manual. Germany: BSI.

Clark, D., & Wilson, D. (1989). Evolution of a Model for Computer Integrity. In Report of the Invitational Workshop on Data Integrity (NIST Special Publication 500-168, pp. A.2-1–A.2-13). Gaithersburg, MD: National Institute of Standards and Technology.

Denning, D. E. (1999). Information Warfare and Security. Singapore: Addison Wesley.

Goguen, J. A., & Meseguer, J. (1982). Security Policies and Security Models. In Symposium on Security and Privacy (pp. 11-20). Menlo Park, CA: IEEE.

Hassler, V. (2001). Security Fundamentals for E-Commerce. Norwood, MA: Artech House Inc.

Lampson, B. W. (1974). Protection. In Proc. Fifth Princeton Symposium on Information Sciences and Systems, Princeton University; reprinted in Operating Systems Review, 8(1), 18-24.

Mayfield, T., Roskos, J. E., Welke, S. R., & Boone, J. M. (1991). Integrity in Automated Information Systems. Prepared for the National Computer Security Center (NCSC). Alexandria, VA: NCSC.

Peltier, T. R. (1999). Information Security Policies and Procedures. Boca Raton, FL: Auerbach Publications.

Peltier, T. R. (2005). Information Security Risk Analysis. Boca Raton, FL: Auerbach Publications.

Plate, A., & Weissmann, O. (Eds.). (2005). Information Technology – Code of Practice for Information Security Management (ISO/IEC 17799:2005). Berlin, Germany: ISO.

Richardson, R. (2003). CSI/FBI Computer Crime and Security Survey. Hayward, CA: Computer Security Institute.

Sengupta, A., Mukhopadhyay, A., Ray, K., Roy, A. G., Aich, D., Barik, M. S., & Mazumdar, C. (2005). A Web-Enabled Enterprise Security Management Framework Based on a Unified Model of Enterprise Information System Security (An Ongoing Project Report). In First International Conference on Information Systems Security, ICISS 2005 (pp. 328-331). Kolkata, India: LNCS.

Snare, J., & Kuiper, E. (Eds.). (2005). Information Technology – Security Techniques – Information Security Management Systems – Requirements (BS ISO/IEC 27001:2005, BS 7799-2:2005). Geneva, Switzerland: ISO.

Stoneburner, G. (2001). Underlying Technical Models for Information Technology Security (NIST Special Publication 800-33). Gaithersburg, MD: NIST.

Weiner, E. S. C., & Simpson, J. A. (Eds.). (1991). The Compact Edition of The Oxford English Dictionary. USA: Oxford University Press.

Yu, C.-F., & Gligor, V. D. (1990). A Specification and Verification Method for Preventing Denial of Service. IEEE Transactions on Software Engineering, 16(6), 581-592. doi:10.1109/32.55087
Key Terms and Definitions

Asset: Anything that has value to an organization. With respect to security, an asset may be a physical resource or information contained within the organization.

Information Asset: Databases, data files, system documentation, user manuals, training material, operational and support procedures, intellectual property, continuity plans, fallback arrangements, archived information.

Hardware Asset: Computer equipment (processors, monitors, laptops, modems), communication equipment (routers, hubs, PABXs, fax machines), magnetic media (tapes and disks), other equipment, cabinets, safes.

Risk Analysis: The process of analyzing a target environment and the relationships of its risk-related attributes. This analysis identifies threat vulnerabilities, associates those vulnerabilities with affected assets, identifies the potential for and nature of an undesirable result, and specifies risk-mitigating controls.

Risk Management: The process of assigning priority to, budgeting, implementing, and maintaining appropriate risk-reducing measures.

Safeguard: A risk-reducing measure that acts to detect, prevent, or minimize loss associated with the occurrence of a specified threat.

Security Concern: The security concern of an asset is a function of the threat to and vulnerability of that asset.

Severity: The level of exploitation of a vulnerability on a qualitative scale, defined by the severity value.

Software Asset: Application software, system software, development tools, and utilities.

Threat: Any unwanted activity or event that under certain conditions could jeopardize the integrity, confidentiality, or availability of information and other assets.

Vulnerability: An inherent weakness associated with an enterprise asset. It is a condition that has the potential to allow a threat to occur with greater impact, greater frequency, or both.
This work was previously published in Handbook of Research on Social and Organizational Liabilities in Information Security, edited by Manish Gupta and Raj Sharman, pp. 118-132, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.12
From ERP to Enterprise Service-Oriented Architecture

Valentin Nicolescu
Technische Universität München, Germany

Holger Wittges
Technische Universität München, Germany

Helmut Krcmar
Technische Universität München, Germany
Abstract
This chapter provides an overview of past and present developments in the technical platforms of ERP systems and their use in enterprises. Taking into consideration the two layers of application and technology, we present the classical scenario of an ERP system as a monolithic application block. As the demands of modern enterprise software cannot be met by this concept, the shift to a more flexible architecture such as the service-oriented architecture (SOA) is the current status quo in modern companies. Keeping in mind the administrative complexity of such structures, we discuss the new idea of business webs. The purpose of our chapter is, on the one hand, to show the historical development of ERP system landscapes and, on the other hand, to compare the presented concepts with respect to the application and technology views.
Introduction
With the emergence of the SOA concept, the classical architecture of ERP systems has started to change and is in constant flux towards new structures. We want to show these changes, starting with the architecture of ERP systems and describing the different parts of this concept. To exemplify it, we will present the most important aspects of concrete implementations of these principles. As one of the most important ERP systems, we will focus on the structure of the SAP ERP system and will describe the changes of this platform.
DOI: 10.4018/978-1-59904-859-8.ch023
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Our analysis will comprise an application-centered and a technical view, considering changes in business paradigms and new technologies that have enabled new kinds of business and process management. Starting with classical ERP systems and their implementation in SAP R/3, we will move on to the current concepts of SOA and Enterprise SOA. This goes along with a change in technical architectures as well. The SAP NetWeaver platform will be presented as an example of a complete Enterprise SOA platform. Its most important functions will be pointed out and, utilizing this example, the components that are necessary to realize Enterprise SOA are identified. The light in which SAP NetWeaver is seen has changed in recent years, as the technical components are no longer in the spotlight but rather the applications that are made possible by such a platform. Finally, we will show the future concept of business webs, which will be based on Enterprise SOA, and conclude our chapter with an outlook on further developments in this area. The structure of our chapter is illustrated in Figure 1.
Figure 1. Structure of this chapter

Classical ERP Systems
Classical ERP systems can be described as commercial software products that are adaptable to company-specific demands. Their typical functional modules include purchasing, manufacturing, sales, finance, human resources, service, and, in general, reporting. Classical ERP systems focus on data integration and also support process integration within one company. The technical basis of an ERP system today is usually a client/server architecture, where often more than one application server is connected to the central database server. The user of a classical ERP system mostly works with basic business transactions like "create order", "update customer contact data", "print invoice", "execute report xy", etc. Changes within such a system due to business transactions are usually propagated in "business" realtime, meaning a few seconds or minutes. The following figure from Davenport visualizes the architecture of a classical ERP system. In addition to the facts mentioned before, there is usually a wide range of reporting functionality for management and stakeholders, based on the central ERP data. There are a lot of advantages that unfold with the use of ERP systems (Vogel and Kimbell 2005; N.N. 2007). In the following, a few of them are spotlighted:

• Standardisation
• Integrated "best practice" business knowledge
• Data quality
• Data and process integration
• Central authorisation and authentication
However, there are also a number of potential risks related to the use of ERP systems:
• Possible single point of failure
• Problems integrating different ERP systems
• Tracking of complex processes – even if they are handled by only one ERP system
• Operation by personnel with inadequate education in ERP in general
Additional considerable risks that need to be especially emphasized arise from deficiencies in the ERP introduction project. Examples thereof could be:

• Mistakes in the selection of the ERP system
• Misunderstandings/mistakes in the system customizing
• Failures in the business/IT alignment
Despite the possible problems that served as examples, ERP systems can be seen as an example of what Carr calls the ubiquity of information systems power (Carr 2003). Nowadays the question is not whether to use an ERP system in your company, but how. There is also research available describing the value of ERP systems more concretely; see for example (Martin, Mauterer et al., 2002).
SAP R/3 Basis
For the discussion of the technology of classical ERP systems, we will take a closer look at SAP R/3 and the underlying technological platform, SAP Basis. Since SAP's application was the first enterprise software fulfilling the characteristics of an ERP system and still represents the leading edge in ERP technology, analyzing this product provides a complete outline of the past and present development in ERP system technology. SAP R/2 was the first enterprise application integrating functions of different enterprise divisions in one piece of software and thus represented the first ERP system. It provided functions for finance, accounting, human resources, sales, procurement, and manufacturing. The system ran on mainframe architectures, while its successor, R/3, relies on a client-server architecture and also includes a whole set of new functions (Vogel and Kimbell 2005). Because SAP R/3 was for a long time the ERP system, it shall serve as our starting point for the detailed analysis of the technology layer. Moving from a mainframe structure to the client-server architecture, the technical part of the SAP application was divided into a 3-tier landscape consisting of the front end tier, typically running on the user's desktop, the application tier, and the database tier.
Figure 2. Anatomy of an enterprise system (Davenport, 1998)
The front end tier is a fat client that displays all kinds of screens based on small information packages exchanged with the application tier. The application tier resides on high-performance servers and implements the application logic and enterprise functionality as well as the connection to the database tier. The databases used for R/3 have been provided by different manufacturers and, with the database product SAPDB, also by SAP itself. By dividing the technical architecture into these three tiers, it became scalable and could be enhanced by adding more desktop computers or servers to the tier that lacks computational power. The technical layer of the application tier, and thus of SAP R/3 systems, is called SAP Basis. This layer's characteristics of capital importance include platform and database independence with respect to the interpretation of application source code. The code is written in SAP's programming language, called Advanced Business Application Programming (ABAP). All of SAP's out-of-the-box functions can be viewed in plain text, edited (with some limitations), and hence also enhanced. The integrated development environment, called the ABAP Workbench, can be used for these tasks. Providing these possibilities, the platform can be adapted in a very flexible way to individual needs. In practice, however, only few modifications are made to the source code delivered by SAP, because the algorithms represent the know-how of many business experts. In other words, the implemented algorithms are best practices, cast into procedures, for how to support a specific process. When using SAP R/3 for crucial functions, enterprises have a data centralization spanning the whole company. Nonetheless, SAP R/3 turned out to be insufficient for the integration of suppliers and customers as well as for performing in-depth analysis of business data. Thus new applications like SAP Customer Relationship Management (CRM), Supply Chain Management (SCM), or the Business Warehouse (BW) were created to meet the growing demands. All these
new products were based on SAP Basis. This is why a highly integrated system landscape could be implemented despite the different systems. By using the same technical layer, all systems can communicate and exchange data very easily, utilizing the same functions. These functions are arranged in a hierarchical structure defined by business areas and in this way facilitate changes and enhancements of business processes. Furthermore, most functions are encapsulated in so-called function modules, which can be called remotely by another application such as another SAP system or a third-party product. Therefore, communication with applications of third-party vendors can be established, within limitations. This packaging of functions is also an important step towards a service-oriented architecture, as will be shown later (Buck-Emden 1998). Although the SAP Basis of SAP R/3, as briefly presented above, was already quite flexible and enabled communication to and from other SAP or third-party applications, the degree of integration was limited. There was no complete integration of functions and data of different applications to create completely new business processes across system boundaries. Additionally, changing or enhancing crucial functions or processes was very difficult, as the dependencies between systems and function modules were not clearly documented. A fundamental problem of this technical layer was also its integration into the application layer: SAP Basis could not be used without SAP R/3 or other products based on this very platform. These issues forced a fundamental change to provide a much more flexible technical layer for future business applications.
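As an illustration of such a remote function module call, the sketch below uses SAP's Java Connector (JCo 3.x, a later incarnation of this RFC interface) to invoke SAP's standard test module STFC_CONNECTION from a third-party Java program. The destination name "MY_SYSTEM" and its connection configuration are assumptions.

```java
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;

// Sketch of a remote function module call from outside SAP, assuming JCo 3.x
// is on the classpath and a destination configuration named "MY_SYSTEM"
// (host, client, credentials) has been set up beforehand.
public class RfcCallExample {

    public static void main(String[] args) throws JCoException {
        JCoDestination destination = JCoDestinationManager.getDestination("MY_SYSTEM");

        // Look up the function module's interface in the backend repository
        JCoFunction function = destination.getRepository().getFunction("STFC_CONNECTION");
        if (function == null) {
            throw new IllegalStateException("STFC_CONNECTION not found in backend");
        }

        function.getImportParameterList().setValue("REQUTEXT", "Hello from outside SAP");
        function.execute(destination);   // synchronous RFC round trip

        System.out.println(function.getExportParameterList().getString("ECHOTEXT"));
    }
}
```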
SOA and Enterprise SOA
We will rely on the following definition of a SOA: "SOA is an architectural style whose goal is to achieve loose coupling among interacting software agents. A service is a unit of work done by a service provider to achieve desired end results for a
service consumer. Both provider and consumer are roles played by software agents on behalf of their owners." (He 2003) In addition to this definition, SOA is characterized by the intensive use of standards (like SOAP, UDDI, etc.) to implement services. But the concept of SOA itself is not tied to a technical or business domain. It is a concept of how functions can be implemented on distributed entities, just like the client-server concept or the idea of object orientation (OO). Both of these can actually be related to SOA: services within a SOA could, for example, be implemented using an OO programming language, and the result could technically run in an environment that follows the client-server concept. The central components of a SOA are shown in Figure 3. The central element within a SOA is the service. Whenever an application wants to use a service, it communicates with the service repository (for example via UDDI), using the (enterprise) service bus as the technical backbone. Examples of service buses include IBM WebSphere ESB, Microsoft BizTalk, Oracle ESB, and SAP XI. The application front end is used to access the network of services. A service itself consists of an implementation comprising the business logic and data. It is also formalized by a service contract (e.g., IDL, WSDL) and a service interface (e.g., SOAP) (Booth, Haas et al., 2004).
Figure 3. The key abstractions of a SOA (Krafzig, Blanke et al., 2004)

Key features of SOA as described by Erl (Erl 2005) are:

• Self-describing: The service can be fully described on a formal level; that is, what operations are available and which data types can be exchanged. Formal self-description is the basis for automatically generating proxies. Note that in this context the word "description" does not refer to the semantic description.
• Locatable: Potential consumers can locate and contact the service. There is a registry for this purpose that operates as the "Yellow Pages" of services and allows users to search for services.
• Roughly structured: Services are roughly structured when they return comprehensive data (for example, a sales order) in response to a service call instead of delivering individual attributes (such as an order quantity); see the sketch after this list.
• Loosely linked: Services are said to be loosely linked if potential functional changes within the services have no effect on the availability of operations. Services are considered to be independent if they have no dependencies on other services, i.e., if the availability of a service is not affected by the potential unavailability or improper functioning of any other service.
• Reusable: Reusability is one of the most important and longest-standing requirements in software technology. It aims at ensuring that components are reused as often as possible without modification to the functional core or interfaces.
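The sketch announced in the list above contrasts a roughly structured, coarse-grained service operation with its fine-grained alternatives. All type and operation names are hypothetical.

```java
// Illustrative sketch contrasting a roughly structured (coarse-grained)
// service with fine-grained alternatives. All names are hypothetical.
public interface SalesOrderService {

    // Coarse-grained: one call returns the comprehensive business object ...
    SalesOrder getSalesOrder(String orderId);

    // ... instead of many fine-grained calls for individual attributes, e.g.:
    // int getOrderQuantity(String orderId);
    // String getCustomerId(String orderId);
}

// Data holder for the result; in a SOA the same structure would be described
// formally in the service contract (e.g., as a WSDL complex type).
class SalesOrder {
    final String orderId;
    final String customerId;
    final int quantity;
    final double netValue;

    SalesOrder(String orderId, String customerId, int quantity, double netValue) {
        this.orderId = orderId;
        this.customerId = customerId;
        this.quantity = quantity;
        this.netValue = netValue;
    }
}
```

Returning the whole order in one call keeps consumers decoupled from the provider's internals: attribute-level changes do not ripple through dozens of fine-grained operations.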
Enterprise SOA extends the features of the SOA concept with a business view on services. Services therefore additionally have the following key features:

• Services represent business services instead of technical services
• Support of inter-organizational collaboration on a business level
The technical basis of an Enterprise SOA today is usually a client-server architecture, where many application servers and many database servers are part of an often regionally distributed IT landscape. In contrast to a list of basic ERP transactions, Enterprise SOA offers access to high-level business processes like "hire employee", "produce good", "distribute good", etc. Seen in this light, one might define Enterprise SOA as the business glue connecting isolated business tasks, such as the previously mentioned examples, into comprehensive business processes. Working that way, Enterprise SOA opens the door to realtime business processes instead of realtime transactions.
Different Views on SAP NetWeaver
The presented idea of service-oriented architectures and its use in an enterprise domain creates the need for a technical platform that can enable the interaction of the different services. SAP developed the currently most complete technical layer for Enterprise SOA, called SAP NetWeaver.
NetWeaver is comparable to the SAP R/3 Basis; however, it does not itself provide any business functions but is the technical basis for enterprise applications like SAP Enterprise Resource Planning (ERP) (Woods and Word 2004). Since the components of this layer are already a few years old, the view on this platform has changed. First we will present the classical component-based view, covering the specific technical systems that are part of this platform. Afterwards we will show the shift from the component-based to an application-based view that no longer cares about technical systems but focuses on the application of a set of systems for business purposes.
Component Based View on NetWeaver
The component-based view on SAP NetWeaver is also called the "NetWeaver fridge" because you can use specific components of the NetWeaver platform just as you can take ingredients from your fridge for lunch. This technical fridge consists of four layers representing different aspects of an enterprise system landscape, as well as two cross-functional areas that are used by all other products. We can distinguish the layers for application platform, process integration, information integration, and people integration. The general areas are life cycle management and SAP's Composite Application Framework (CAF). The specific layers combine different software products of the SAP portfolio; thus this view on NetWeaver is a rather technical one (Karch, Heilig et al., 2005). Figure 4 gives an overview of all layers and their elements, at which we want to take a closer look. The application platform consists of three elements, whereas the DB and OS abstraction is an integral part of the two other elements. These two elements are the ABAP and the Java core of all SAP applications, called Web Application Server (Web AS) ABAP and Web AS Java.
Figure 4. Component based view SAP NetWeaver
Depending on the application to be executed on the Web AS, either one or both cores are used as the runtime environment. The SAP NetWeaver Portal, for example, needs only the Java core, while SAP Process Integration uses both Java and ABAP. The Web AS can also be used for technical purposes like monitoring, without enterprise applications building upon it (Heinemann and Rau 2003). The process integration layer comprises the elements of an integration broker and of business process management, which are combined in SAP's product called Process Integration (PI, formerly Exchange Infrastructure). The integration broker's task within NetWeaver is the exchange of data using different formats and protocols. It can, for example, convert messages sent via FTP in an XML format into a message for an SAP function module based on one of SAP's internal protocols. Besides this technical conversion, changes to the content can also be processed, such as converting the values of the gender field (female, male) into values for a suitable salutation (Ms, Mr.); a small sketch of such a value mapping follows below. Business Process Management (BPM), also known as cross-component BPM (ccBPM), provides functions for workflow management across the boundaries of single enterprise systems. It can help to implement automated processes by triggering and coordinating data exchange between all kinds of enterprise systems which can be accessed by the integration broker. A business process can therefore automatically manage the selection of the supplier with the best price after receiving all quotations and may subsequently place an order. NetWeaver's process integration layer and the product of the same name can be regarded as the communication backbone of a modern enterprise system landscape. It provides centralized functions for data exchange within the landscape and with systems in supplier or customer domains.
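The small value mapping sketch announced above shows the idea of such a content conversion in plain Java. In SAP PI this would normally be configured as a value mapping rather than hand-coded; the code here only illustrates the transformation itself.

```java
import java.util.Map;

// Illustrative sketch of a content conversion (value mapping) of the kind an
// integration broker applies when translating message fields between systems.
public class ValueMappingExample {

    static final Map<String, String> GENDER_TO_SALUTATION =
            Map.of("female", "Ms", "male", "Mr");

    static String mapGender(String sourceValue) {
        String target = GENDER_TO_SALUTATION.get(sourceValue.toLowerCase());
        if (target == null) {
            // Unmappable values are a routine integration problem; brokers
            // typically route such messages to an error queue for reprocessing.
            throw new IllegalArgumentException("No mapping for: " + sourceValue);
        }
        return target;
    }

    public static void main(String[] args) {
        System.out.println(mapGender("female"));  // Ms
        System.out.println(mapGender("MALE"));    // Mr
    }
}
```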
The information integration layer combines three tasks which are arranged in three different applications. Master Data Management (MDM) and Business Intelligence (BI) can be found in the products of the same name, while Knowledge Management (KM) is part of the SAP NetWeaver Portal, which belongs to the subsequent layer. MDM helps to consolidate data within an enterprise across system boundaries. The same data is often saved in different places for technical reasons. In many cases this redundancy can be removed so that the data is saved in only one place, or at least synchronization between the different databases can be set up. Thus the same data is available in all connected systems, and using this correct data, proper business decisions can be made. As the amount of data on which decisions are based can be vast, SAP's Business Intelligence helps to arrange and visualize data from different points of view. By using SAP's Strategic Enterprise Management (SEM), balanced scorecards can be used as an enterprise dashboard or management cockpit. While MDM and BI are used for structured data, KM components help to manage unstructured data like files containing different kinds of information. As KM is part of the NetWeaver Portal, it can easily be integrated into the front end and therefore into daily work. In the layer of people integration, three different tasks are combined in two applications. On the one hand, there is the multi-channel access provided by SAP's Mobile Infrastructure (MI). On the other hand, there is the SAP NetWeaver Portal, including the tasks of portal, collaboration, and the knowledge management which forms part of the information integration layer. This layer hence contains applications for end user communication. The Mobile Infrastructure provides a client-server architecture that enables the synchronization of data on mobile devices like PDAs or notebooks with the enterprise system landscape. This way, traveling salesmen have all the data they need to help customers place an order. The portal itself provides a web-based framework for arranging and displaying screens and information. Furthermore, it integrates business process screens, providing a single point of entry to the front ends in the enterprise system landscape. It can be used to implement flexible wizards working across system boundaries. These basic functions can be enhanced by synchronous or asynchronous collaboration and knowledge management. The cross-functional area of life cycle management covers all activities for running an application in an enterprise system landscape. This includes the installation preparation of specific applications, customizing activities, monitoring
and maintenance, and upgrade and migration tasks. The SAP product supporting these tasks is the SAP Solution Manager, which provides part of these functions as a first step towards the implementation of ITIL. As a SOA consists of many different small services, an integration layer for coordinating the steps within a business process is needed. This layer is the Composite Application Framework (CAF), which allows the development of applications based on the underlying services, that is, web services or SAP function modules provided by SAP applications. The implementation of a persistence layer as well as the design of user interfaces are also part of this development. In order to access backend systems, persistence, and user interfaces, the appropriate Java source code can be generated. This code runs within the SAP NetWeaver Portal, providing the connectivity of a Web AS Java and the user interface of the portal framework. CAF can be used to have professional developers generate Java-based applications or to quickly implement an ad-hoc workflow called a guided procedure. By relying on other NetWeaver products, a new CAF application or a guided procedure can take advantage of existing functions like the connection to third-party systems or user interface functions.
Application Based View on NetWeaver
As we already pointed out, the component-based view on the NetWeaver fridge is dominated by a technical aspect. For most scenarios, more than one of these products is necessary to attain the goal. Thus the NetWeaver fridge has been cut into slices to show that the NetWeaver platform does not exist for its own sake, but to enable and support different IT practices (see Figure 5). The result of cutting the NetWeaver fridge into slices is 10 IT practices, which shall be discussed now. Each of the practices is itself divided into IT scenarios describing concrete activities to achieve the target of a scenario, for example implementing single sign-on for an application or in a whole system landscape (Nicolescu, Funk et al., 2006).
Figure 5. Shift from component based view to application based view on NetWeaver
achieve the target of a scenario, for example implementing single sign-on for an application or in a whole system landscape (Nicolescu, Funk et al., 2006). 1.
2.
3.
The first IT practice called user productivity enablement aims at improving the daily work of users by providing tools to exchange information between them, any by means of personalization for an easier access to data and business processes. This IT practice mainly builds upon the SAP NetWeaver Portal. The Practice of data unification tries to consolidate and harmonize data that is stored in different databases all over the enterprise. As this task is mainly connected in identifying redundancy and keeping data in different places synchronous, the vital application of this task is the Master Data Management (MDM). Business information management is an important support task for decision makers in a company. This practice tries to provide structured and unstructured information to make the right decision based on the given data. As this combines elements of Business Intelligence and Knowledge Management, BI and portal are the main components of this practice.
4.
5.
6.
Unlike the business information management that provides functions for proactive work with business data, the business event management works by a Push technology. Whenever important events happen in the company the relevant people will be informed and can react. Reacting on events is abstracted to the necessary data and entry to appropriate business transactions for a given situation. Main components of this practice are again SAP Business Intelligence and SAP NetWeaver Portal. End-to-end process integration allows the implementation and monitoring of business processes across system boundaries within a company’s own system landscape and with systems of business partners. This way, crucial processes can be monitored and adjusted centrally. The component providing these features is SAP Process Integration. Development and adaptation of enterprise applications represents the sixth IT practice which includes the creation of completely new software as well as changing and enhancing existing functions. Because development in an enterprise environment poses many challenges, it is supported by components like the Composite Application Framework.
177
From ERP to Enterprise Service-Oriented Architecture
7.
8.
9.
The IT Infrastructure Library (ITIL) plays an important role in today’s IT management, which is why the IT practice of unified life cycle management covers most of its concepts. An important building block for attaining this goal is the use of the SAP Solution Manager. The topic of application governance and security management is covered by the eighth IT practice. It deals with the holistic management of applications within a system landscape, addressing issues like communication and security of the systems. The Concept of implementing single sign-on is just one example. As this task affects all systems in an enterprise, there is no specific component to point out. While the presented IT practice of data unification aims at the consolidation of data, the IT practice of consolidation deals with all approaches to simplify the whole system landscape. Examples of these approaches are service oriented architecture or server virtualization.
Figure 6. Line 56 ecosystem (Line 56, n.d.)
10. The last IT practice, enterprise service-oriented architecture design and deployment, covers the areas of planning, developing, reusing, and maintaining SOA applications. Once formerly monolithic applications have been separated into smaller services, keeping track of these modules and reusing them wisely becomes a major concern for development.
Business Webs

"Low-cost information and communication technologies, global markets, and global competition have forced enterprises to rethink their traditional, vertically integrated structure. As the cost of collaboration plummets, it increasingly makes sense for companies to focus on core competencies and partner for all other functions. As a result, corporations are transforming into flatter and more specialized enterprises, linking with suppliers and other businesses to form a larger, more open, value-creation entity" (Tapscott, 2007) – the business web.
The innovation potential and practical relevance of the business web are a recent topic for ERP software providers. SAP (Karch, Heilig et al., 2005; Tapscott, 2007) and salesforce.com (Gilbert, 2006) have described it as an important future shift, similar to the migration from the classical World Wide Web to Web 2.0. The business web can only be realized if an adequate ecosystem has been set up; Figure 6 illustrates such an ecosystem. In contrast to the well-known eCommerce or supply chain models, this ecosystem focuses not only on selected partners but on a whole community. The idea of integrating virtually everything is not a new one, but it has often failed due to high complexity. Enterprise SOA now offers a concept and accompanying tools that make this complexity manageable by partitioning it and linking the emerging parts (partner processes) together.
Conclusion

Companies try to protect investments in old applications by wrapping old code into web services. This can be described as a three-step approach. Legacy to SOA is the first step: the idea is to make your business transactions accessible, independent of the platform they run on, and to abstract from the programming language they were implemented in. The second step is to map these services onto different business processes by focusing on your internal processes and your immediate business partners, i.e., customers and suppliers. The consequential third step is to extend the reach of your services – and along with it the reach of your business – by integrating them into a business web. This demonstrates the power of implementing (Enterprise) SOA. As a first benefit, value is generated by reducing the effort of integrating your internal IT systems. Then additional value is generated by offering your direct partners access to your (Enterprise) SOA implementation. Finally, value is generated by your community if they use your enterprise's services within the business web to enable their own business processes.
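To make the three-step approach concrete, the sketch below shows what the first step – exposing a legacy business transaction through a platform- and language-neutral interface – might look like. This is a minimal illustration only: the credit-check routine, the URL form, and the use of Python's standard HTTP server are our own assumptions, not part of this chapter or of any SAP product.

```python
# A minimal sketch of step one (legacy-to-SOA): exposing a legacy business
# transaction through a platform- and language-neutral service interface.
# The legacy call is simulated; all names are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_check_credit(customer_id: str) -> dict:
    """Stand-in for a call into legacy code (e.g., a wrapped COBOL routine)."""
    return {"customer_id": customer_id, "credit_ok": True}

class CreditService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assumed URL form: /credit/<customer_id>
        customer_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps(legacy_check_credit(customer_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any SOA consumer (internal system, partner, business web) can now call
    # the transaction without knowing its platform or implementation language.
    HTTPServer(("localhost", 8080), CreditService).serve_forever()
```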
References

Booth, D., Haas, H., et al. (2004). Web Services Architecture. W3C.
Buck-Emden, R. (1998). Die Technologie des SAP-Systems R/3. München: Addison-Wesley.
Carr, N. G. (2003). IT doesn't matter. Harvard Business Review, 81(5), 41–49.
Davenport, T. H. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, 76(4), 121–131.
Erl, T. (2005). Service-Oriented Architecture: Concepts, Technology, and Design. Upper Saddle River, NJ: Prentice Hall.
Gilbert, A. (2006). Salesforce CEO's vision for 'business Web'. www.news.com.
He, H. (2003). What is service-oriented architecture? Retrieved 14.06.2007, from http://www.xml.com/lpt/a/1292.
Heinemann, F., & Rau, C. (2003). SAP Web Application Server. Bonn: Galileo Press.
Karch, S., Heilig, L., et al. (2005). SAP NetWeaver. Bonn: Galileo Press.
Krafzig, D., Banke, K., et al. (2004). Enterprise SOA: Service-Oriented Architecture Best Practices. Upper Saddle River, NJ: Prentice Hall.
Line 56. (n.d.). SOA Ecosystem. Retrieved from http://www2.sims.berkeley.edu/academics/courses/is290-4/s02/readings/line56ecosystem.gif.
Martin, R., & Mauterer, H. (2002). Systematisierung des Nutzens von ERP-Systemen in der Fertigungsindustrie. Wirtschaftsinformatik, 44(2), 109–116.
N. N. (2007). Enterprise resource planning. Retrieved 8.06.2007, from http://en.wikipedia.org/wiki/Enterprise_resource_planning.
Nicolescu, V., Funk, B., et al. (2006). SAP Exchange Infrastructure for Developers. Bonn: Galileo Press.
Tapscott, D. (2007). Rethinking Enterprise Boundaries: Business Webs in the IT Industry. New Paradigm.
Vogel, A., & Kimbell, I. (2005). mySAP ERP For Dummies. Indianapolis: Wiley Publishing.
Woods, D., & Word, J. (2004). SAP NetWeaver For Dummies. Wiley Publishing.
Key Terms and Definitions

Business Web: Corporations are transforming into flatter and more specialized enterprises, linking with suppliers and other businesses to form a larger, more open, value-creation entity – the business web.

IT Practice: The application-based view on SAP NetWeaver identifies different main use cases within an enterprise system landscape, which are called IT practices. At different abstraction levels, they describe the activities necessary to implement a specific technical task in a company.

SAP R/3: SAP R/3 was released in 1992 as the first ERP system for very large businesses based on a client-server architecture. It was divided into different functional modules and was the central point of enterprise system landscapes.

SAP NetWeaver: In the move from SAP R/3 to SAP ERP, the technical foundation of SAP R/3, SAP Basis, was separated from the business functions and enhanced by many other technical features. This new technical basis, which enables SOA, is called NetWeaver.

SOA: SOA is an architectural style whose goal is to achieve loose coupling among interacting software agents. A service is a unit of work done by a service provider to achieve desired end results for a service consumer. Both provider and consumer are roles played by software agents on behalf of their owners.
This work was previously published in Handbook of Research on Enterprise Systems, edited by Jatinder N. D. Gupta, Sushil Sharma and Mohammad A. Rashid, pp. 316-328, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.13
Data Reengineering of Legacy Systems Richard C. Millham Catholic University of Ghana, Ghana
Introduction

Legacy systems, from a data-centric view, could be defined as old, business-critical, and standalone systems that have been built around legacy databases, such as IMS or CODASYL, or legacy database management systems, such as ISAM (Brodie & Stonebraker, 1995). Because of the huge scope of legacy systems in the business world (it is estimated that there are 100 billion lines of COBOL code alone for legacy business systems; Bianchi, 2000), data reengineering of legacy systems and their data, along with its related step of program reengineering, constitutes a significant part of the software reengineering market. Data reengineering of legacy systems consists of two parts. The first step involves recognizing the data structures and semantics, followed by the
second step, in which the data are converted to the new or converted system. Usually, the second step involves substantial changes not only to the data structures but also to the data values of the legacy data themselves (Aebi & Largo, 1994). Borstlap (2006), among others, has identified potential problems in retargeting legacy ISAM data files to a relational database. Aebi (1997), in addition to data transformation logic (converting sequential file data entities into their relational database equivalents), also looks into data quality problems (such as duplicate data and incorrect data) that are often found with legacy data. Because the database and the program manipulating the data in the database are so closely coupled, any data reengineering must address the modifications to the program's data access logic that the database reengineering involves (Hainaut, Chandelon, Tonneau, & Joris, 1993).
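As a concrete illustration of these two steps, the sketch below parses a hypothetical fixed-width sequential record layout (step one) and converts the values into a relational table while flagging the kind of duplicate-data quality problem Aebi (1997) describes (step two). The record layout, field names, and use of SQLite are illustrative assumptions, not taken from the cited work.

```python
# Illustrative sketch of the two data reengineering steps: (1) interpret the
# legacy record structure, (2) convert the values into the target relational
# schema, flagging quality problems such as duplicate keys along the way.
# The fixed-width layout below (key, name, balance) is hypothetical.
import sqlite3

LAYOUT = [("cust_no", 0, 6), ("name", 6, 16), ("balance", 16, 30)]

def parse_record(line: str) -> dict:
    """Step 1: recognize the structure of one legacy sequential record."""
    return {field: line[start:end].strip() for field, start, end in LAYOUT}

def migrate(lines, con):
    """Step 2: convert records, rejecting duplicates instead of loading them."""
    con.execute("""CREATE TABLE IF NOT EXISTS customer (
                       cust_no TEXT PRIMARY KEY,
                       name    TEXT NOT NULL,
                       balance REAL)""")
    rejected = []
    for line in lines:
        rec = parse_record(line)
        try:
            con.execute("INSERT INTO customer VALUES (?, ?, ?)",
                        (rec["cust_no"], rec["name"], float(rec["balance"])))
        except (sqlite3.IntegrityError, ValueError):
            rejected.append(rec)  # duplicate key or unconvertible value
    return rejected

con = sqlite3.connect(":memory:")
bad = migrate(["000001Smith, J. 00012.50",
               "000001Smith, J. 00012.50"], con)  # second record is a duplicate
print(len(bad))  # -> 1
```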
DOI: 10.4018/978-1-60566-242-8.ch005
In this chapter, we will discuss some of the recent research into data reengineering, in particular the transformation of data, usually legacy data from a sequential file system, to a different type of database system, a relational database. This article outlines the various methods used in data reengineering to transform a legacy database (both its structure and data values), usually stored as sequential files, into a relational database structure. In addition, methods are outlined to transform the program logic that accesses this database to access it in a relational way using WSL (wide spectrum language, a formal language notation for software) as the program’s intermediate representation.
Related Work

In this section, we briefly describe the various approaches that researchers have proposed and undertaken in the reengineering of legacy data. Tilley and Smith (1995) discuss the reverse engineering of legacy systems from various perspectives: software, system, managerial, evolution, and maintenance. Because any data reengineering should address the subsequent modifications that the program's data access logic entails, Hainaut et al. (1993) have proposed a method to transform this data access logic, in the form of COBOL read statements, into the corresponding SQL relational database equivalents. Hainaut et al. (1993) identify two forms of database conversion strategies. One strategy (physical conversion) is the physical conversion of the database, where each construct of the source database is translated into the closest corresponding construct of the target database without any consideration of the semantic meaning of the data being translated. One of the problems with this strategy is that the resulting target database is of very low quality. The second strategy (conceptual conversion) is the recovery of precise semantic information, the conceptual
schema, of the source database through various reverse engineering techniques, and then the development of the target database from this conceptual schema using standard database development techniques. This strategy produces a higher quality database with full documentation as to the semantic meaning of the legacy data, but it is more expensive in terms of the time and effort it entails (Hainaut et al., 1993a). Hainaut et al.'s approach first uses the physical conversion strategy to convert data and then uses a trace of the program that accesses the legacy data in order to determine how the data are used and managed. In this way, additional structures and constraints are identified through the procedural code. Through an analysis of the application's variable dependency graph and of the record and file definitions, data fields are refined, foreign keys are determined, and constraints on multivalued fields are discovered. During the database conceptualization phase, the application's physical constructs of indexes and files are removed and the program's objects of arrays, data types, fields, and foreign keys are transformed into their database equivalents (Hainaut et al., 1993a). Initially, database reengineering focused on recognizing the legacy database structure and transforming these structures into a new model (Aiken & Muntz, 1993; Joris, 1992; Pomerlani & Blaha, 1993; Sabanis & Stevenson, 1992). The values of legacy data were used solely to identify the legacy system's dependencies in terms of keys between records (Pomerlani & Blaha, 1993). Aebi and Largo (1994), in the transformation of database structures, recognize that the transformation of structural schemas involves many issues. The first issue is that the different attributes and entities of the old system must be mapped to the new schema of the transformed database. Constraints may be added, dropped, or changed during the migration from the old to the new system. Entity sets in the new system may be identified by new attributes or by old attributes with changed domains or data types.
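The flavor of the physical conversion strategy can be suggested with a small sketch: each legacy access construct is mapped one-for-one onto the closest relational construct, with no attempt to recover deeper semantics. The COBOL fragment, naming rules, and emitted SQL below are hypothetical; a conceptual conversion would instead first recover what the file and its key mean before designing the target schema.

```python
# Illustrative sketch of "physical conversion": a legacy data access
# construct is mapped one-for-one onto the closest relational construct,
# without recovering any deeper semantics. Record and key names are
# hypothetical.
def translate_keyed_read(record: str, key_field: str) -> str:
    """Map 'READ <file> KEY IS <field>' onto its closest SQL equivalent."""
    table = record.lower().replace("-", "_")
    column = key_field.lower().replace("-", "_")
    return f"SELECT * FROM {table} WHERE {column} = :key"

# COBOL:  READ CUSTOMER-FILE KEY IS CUST-NO
print(translate_keyed_read("CUSTOMER-FILE", "CUST-NO"))
# -> SELECT * FROM customer_file WHERE cust_no = :key
```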
Wu et al. (1997), with their “butterfly” approach, assume that the legacy data are the most important part of the legacy system and it is the schema rather than the values of this legacy data that are the most crucial. This legacy data are modified in successive iterations with the legacy data being frozen and used for reading purposes only. The “Chicken Little” strategy allows the legacy system to interact with the target system during migration, using a gateway to serve as a mediator. This gateway is used to translate and redirect calls from the legacy system to the target database system, and then the gateway translates the results of the target database for use by the legacy system and by the legacy database. Although the legacy system is allowed to interact with the target database during migration, each data access involves two database accesses: one to the target database and another to the legacy database (Bisbal, Lawless, Wu, & Grimson, 1999). Bianchi, Caivano, and Visaggio (2000) proposed a different method of data reengineering where the data structures are reengineered rather than simply migrated. This reengineering involves the use of several steps. The first step is analyzing the legacy data through monitoring of calls to the legacy database from the legacy system (dynamic trace). This dynamic trace is used to identify which data could be classified as conceptual (data specific to the application domain and that describe specific application concepts), control (data that are used for program decisions or to record an event), structural (data that are used to organize and support the data structures of the system), or calculated (data that are calculated by the application). The second step involves redesigning the legacy database after identifying the dependencies among the data. This dependency diagram is then converted to a target database schema. After the legacy data are migrated to the new target schema, the legacy system is then altered such that data accesses to the target database reflect the new database schema (Bianchi et al.). Using
dynamic traces along with data flow and dependency analyses are successful data reverse engineering techniques that have been carried over from the program understanding phase of program reverse engineering (Cleve, Henrard, & Hainaut, 2006; Millham, 2005). Aebi and Largo (1994) also recognize problems with the transformation of database values. These problems include duplicate data being used in the same context; changes in the primary keys of tables, which often entail changes in the foreign keys of the same table; values of an attribute exceeding its given range; different encoding schemes being used for data values; recording errors incorporated into the existing data; and no existing distinction between unknown and null values. Hainaut et al. (1993b) outline some practical schema-relational transformations, such as project-join, extension transformation, and identifier substitution, that depend on the file manager, such as CODASYL, relational, TOTAL/IMAGE, and IMS DBMS, in order to function. For other databases, such as COBOL file structure databases, such transformations are impractical. In the case of COBOL file structure databases, multivalued attributes can be represented by list attributes only. Referential constraints upon files can be detected through careful analysis of the COBOL procedural code, file contents, and secondary keys. One-to-many relationships in files are identified by depicting multivalued, compound attributes B of A as a many-to-one relationship of B to A. Multirecord types within a sequential file may be portrayed as a many-to-one relationship. Multivalued attributes, such as those used in foreign keys, are represented in the relational target model as rows in a separate table with a link to the main referencing table. If the source database is of a network type, a recursive relational type may be represented by an entity type and two one-to-many or one-to-one relational types (Hainaut et al., 1993b).
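The multivalued-attribute transformation just mentioned can be sketched as follows: each value of the multivalued field becomes a row in a separate table linked back to the main referencing table. All table, column, and record names here are hypothetical.

```python
# Sketch of the transformation described above: a multivalued field in a
# legacy record (e.g., a COBOL field occurring several times) becomes rows
# in a separate table linked to the main referencing table.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (cust_no TEXT PRIMARY KEY, name TEXT);
    -- One row per value of the multivalued attribute:
    CREATE TABLE customer_phone (
        cust_no TEXT REFERENCES customer(cust_no),
        phone   TEXT,
        PRIMARY KEY (cust_no, phone));
""")

# Legacy record with a multivalued PHONE field (OCCURS-style):
legacy = {"cust_no": "C1", "name": "Smith", "phones": ["555-1111", "555-2222"]}

con.execute("INSERT INTO customer VALUES (?, ?)",
            (legacy["cust_no"], legacy["name"]))
con.executemany("INSERT INTO customer_phone VALUES (?, ?)",
                [(legacy["cust_no"], p) for p in legacy["phones"]])

print(con.execute("SELECT phone FROM customer_phone").fetchall())
# -> [('555-1111',), ('555-2222',)]
```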
Main Focus: Data Reengineering within Program Transformation

Because the semantic understanding of the data to be reengineered depends to a large degree upon the program accessing this data, and because the program accessing the target database needs to be modified in order to account for reengineered data, one of the areas of focus must be the analysis of the underlying program and its usage of the data (Bianchi et al., 2000; Hainaut et al., 1993b). One of the problems with analyzing a program for data access usage of legacy data is determining exactly when and where this legacy data is accessed. Design pattern analysis of code has been proposed as a method of determining which sections of code might be used for accessing this legacy data (Jarazabek & Hitz, 1998). Another method is to convert the programming-language-specific code into a programming-language-independent intermediate representation. This intermediate representation can be in a formal notation. Formal notations have been used to specify the transformation of both programs and their associated data mathematically. One advantage of these transformations is that they are generic (programming-language independent), such that a transformation from a construct accessing a legacy database to a construct accessing the target database will be the same regardless of the types of legacy and target databases. For example, the formal notation WSL (Ward, 1992), with a proven set of program transformations, has been used to transform a procedurally structured and driven COBOL legacy system, using a WSL intermediate program representation, into an object-oriented, event-driven system (Millham, 2005). Although WSL has been extended to represent data structures such as records (Kwiatkowski & Puchalski, 1998), little attention has been paid to transforming the data structures as represented in WSL from their original sequential file form into
another database format. However, in Millham (2005), the sequential records and their accesses by the application have been specified in terms of sets and set operations. Codd, in his relational database model, has specified this model in terms of sets and relational calculus; WSL, in turn, may be specified in terms of sequences, sets, and set operations (Ward, 1992). Consequently, a method to integrate the relational database model of Codd, in its formal notation involving database specification and operation, and WSL, in its formal notation involving program transformations and sequential file structures, could be accomplished. Other formal notations, similar to WSL, with the same representational capacity, could be utilized in the same manner. In this article, a method to transform a WSL-represented hierarchical database, with its underlying program logic, into its relational database equivalent is provided. This method, a combination of the data reverse engineering techniques of Hainaut et al. (1993) and Bianchi et al. (2000), occurs in conjunction with the program understanding phase of the WSL program reverse engineering process. During this program understanding phase, a static and dynamic analysis of the program code is undertaken. This analysis produces a dependency graph of variables and identifies which variables serve as control, structural, conceptual, and calculated field variables. While WSL, in its representation, does not distinguish between programs and file record types, the source code to WSL translation process keeps track of which records are of type file record. Similarly, WSL is type-less but, during the source code to WSL representation, the original data types of the source code are recorded for later use during the data reengineering process (these processes are outlined in Figure 1). In Figure 2, the data types of each record field, along with their default values, are recorded in a database for future use during the data reengineering process. Hence, it is possible to derive the program’s file and record types, along with the foreign and primary keys, from the information
Figure 1. Flow of reengineering processes and data store
obtained during these translation and analysis processes. If one record, A, has an array field that forms a dependency on another record B, this dependency is depicted as a one-to-many relationship between records A and B. Anti-aliasing of records and their fields reduces the number of records that refer to the same file but use different names. Calculated data may appear as calculated fields in the database schema. Structural data are used to represent the target database schema. Record fields that are used as control data may indicate a relationship, involving the control data, between the record that defines this control data and any record(s) enclosed within the control block that is governed by that control data (Millham, 2005). Constraints on data may be determined through an analysis of their declaration within a program or through the use of Hainaut et al.'s (1993b) methods. Because this data reengineering process occurs in conjunction with program reengineering, any changes to the underlying database structure can easily be propagated to the program reengineering phase, where the program data access logic will be altered to reflect these database changes. Through this data reengineering, a more
logical database structure is derived along with the necessary subsequent program data access transformations. These transformations are based on a formal, platform-independent representation (WSL) such that the source system, whether COBOL or assembly language, is not important and the advantages of a formal notation, such as preciseness, are achieved. Wong and Sun (2006) have been working on detecting data dependencies in programs using a hybrid UML (unified modeling language) collaboration and activity diagram that is expressed in a platform-independent XML (extensible markup language) markup graph.
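A much-simplified sketch of the analysis step described above is given below: record definitions captured during the source-to-WSL translation are scanned, and an array field in one record whose element type is another record is reported as a candidate one-to-many relationship for the target schema. The record catalogue and its representation are our own assumptions, not Millham's (2005) actual data structures.

```python
# Simplified sketch: scan record definitions recorded during translation
# and report array fields whose target is another record type as candidate
# one-to-many relationships (i.e., candidate foreign keys in the target
# relational schema). The catalogue below is hypothetical.
RECORDS = {
    "ORDER":      {"order_no": "char(6)", "lines": ("array", "ORDER_LINE")},
    "ORDER_LINE": {"item_no": "char(8)", "qty": "int"},
}

def derive_relationships(records):
    rels = []
    for rec, fields in records.items():
        for field, ftype in fields.items():
            # An array of another record type implies a 1:N dependency.
            if isinstance(ftype, tuple) and ftype[0] == "array":
                target = ftype[1]
                if target in records:
                    rels.append((rec, "1:N", target, field))
    return rels

print(derive_relationships(RECORDS))
# -> [('ORDER', '1:N', 'ORDER_LINE', 'lines')]
```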
Future Trends

Because up to 50% of a legacy system's maintenance costs, in terms of program and database changes, can be attributed to changing business rules, there is a strong need to adopt new techniques in reengineering legacy data and their associated applications. Jarazabek and Hitz (1998) propose the use of domain analysis and the use of generic
Figure 2. A sample COBOL record and its WSL equivalent
architectural design techniques in reengineering as a method of substantially reducing these maintenance costs. Atkinson, Bailes, Chapman, Chilvers, and Peake’s (1998) preferred approach is to develop interfaces to general persistent data repositories in the direction of generic reengineering environment design. From these interfaces, binding between the data stores, represented in the target databases, and the underlying application, which will access this target database, can be made. Another method is to use formal-language intermediate representations that can represent both the database and program along with a set of corresponding transformations to transform the legacy to the target database and transform the program accessing the legacy database to a program capable of accessing the new target database without errors. Through the use of formal-language intermediate representations, a set of generic reengineering tools for both the legacy database and its underlying application(s) could be designed.
Conclusion

Data reengineering has gone beyond a simple physical conversion of a legacy database to its target database. Both Hainaut et al. (1993a) and Wu et al. (1997) utilize an analysis of the underlying application(s) accessing the legacy database in order to determine the database's underlying data types and foreign keys. Bianchi et al. (2000) go further in using an analysis of the underlying application(s) accessing the legacy database in order to identify the categories of the data being used in the application, such as whether the data are used as conceptual, structural, control, or calculated data, and in order to identify the dependencies among them. In this manner, a better determination of the data usage can be made and a better target database schema can be derived from analysis of this usage.
Because the underlying application(s) that access the legacy database and the legacy database itself are so intrinsically linked during the data reengineering phase, there is a need to be able to analyze the applications, in a programming-language-independent way, for data usage and then transform these applications' data accesses and the legacy database, using a set of generic transformations and tools, to the target database. Formal notations (with their programming-language independence, sets of transformations, and basis in relational database theory and software reengineering) have been proposed as a means to generically analyze the application(s), once in the formal notation's intermediate representation, for data usage and then transform this analysis into an accurate and meaningful target database schema for database reengineering.
References

Aebi, D. (1997). Data engineering: A case study. In C. J. Risjbergen (Ed.), Proceedings in advances in databases and information systems. Berlin, Germany: Springer Verlag.
Aebi, D., & Largo, R. (1994). Methods and tools for data value re-engineering. In Lecture notes in computer science: Vol. 819. International Conference on Applications of Databases (pp. 1-9). Berlin, Germany: Springer-Verlag.
Aiken, P., & Muntz, A. (1993). A framework for reverse engineering DoD legacy information systems. WCRE.
Atkinson, S., Bailes, P. A., Chapman, M., Chilvers, M., & Peake, I. (1998). A re-engineering evaluation of software refinery: Architecture, process and technology.
Behm, A., Geppert, A., & Diettrich, K. R. (1997). On the migration of relational schemas and data to object-oriented database systems. Proceedings of Re-Technologies in Information Systems, Klagenfurt, Austria.
Bianchi, A., Caivano, D., & Visaggio, G. (2000). Method and process for iterative reengineering of data in a legacy system. WCRE. Washington, DC.
Bisbal, J., Lawless, D., Wu, B., & Grimson, J. (1999). Legacy information systems: Issues and directions. IEEE Software, 16(5), 103–111. doi:10.1109/52.795108
Bohm, C., & Jacopini, G. (1966). Flow diagrams, Turing machines, and languages with only two formation rules. CACM, 9(5), 266.
Borstlap, G. (2006). Understanding the technical barriers of retargeting ISAM to RDBMS. Retrieved from http://www.anubex.com/anugenio!technicalbarriers1.asp
Brodie, M. L., & Stonebraker, M. (1995). Migrating legacy systems: Gateways, interfaces, and the incremental approach. Morgan Kaufmann.
Cleve, A., Henrard, J., & Hainaut, J.-L. (2006). Data reverse engineering using system dependency graphs. WCRE.
Hainaut, J.-L., Chandelon, M., Tonneau, C., & Joris, M. (1993a). Contribution to a theory of database reverse engineering. WCRE. Baltimore, MD.
Hainaut, J.-L., Chandelon, M., Tonneau, C., & Joris, M. (1993b). Transformation-based database reverse engineering. Proceedings of the 12th International Conference on Entity-Relationship Approach (pp. 1-12).
Janke, J.-H., & Wadsack, J. P. (1999). Varlet: Human-centered tool for database reengineering. WCRE.
Jarazabek, S., & Hitz, M. (1998). Business-oriented component-based software development and evolution. DEXXA Workshop.
Jeusfeld, M. A., & Johnen, U. A. (1994). An executable meta model for reengineering of database schemas. Proceedings of Conference on the Entity-Relationship Approach, Manchester, England.
Joris, M. (1992). Phenix: Methods and tools for database reverse engineering. Proceedings 5th International Conference on Software Engineering and Applications.
Kwiatkowski, J., & Puchalski, I. (1998). Pre-processing COBOL programs for reverse engineering in a software maintenance tool. COTSR.
Mehoudj, K., & Ou-Halima, M. (1995). Migrating data-oriented applications to a relational database management system. Proceedings of the Third International Workshop on Advances in Databases and Object-Oriented Databases (pp. 102-108).
Millham, R. (2005). Evolution of batch-oriented COBOL systems into object-oriented systems through the unified modelling language. Unpublished doctoral dissertation, De Montfort University, Leicester, England.
Pomerlani, W. J., & Blaha, M. R. (1993). An approach for reverse engineering of relational databases. WCRE.
Rob, P., & Coronel, C. (2002). Database systems: Design, implementation, and management. Boston: Thomas Learning.
Sabanis, N., & Stevenson, N. (1992). Tools and techniques for data remodeling COBOL applications. Proceedings 5th International Conference on Software Engineering and Applications.
Tilley, S. R., & Smith, D. B. (1995). Perspectives on legacy system reengineering (Tech. Rep.). Carnegie Mellon University, Software Engineering Institute.
Ward, M. (1992). The syntax and semantics of the wide spectrum language (Tech. Rep.). England: Durham University.
Weiderhold, G. (1995). Modelling and system maintenance. Proceedings of the International Conference on Object-Orientation and Entity-Relationship Modelling.
Wong, K., & Sun, D. (2006). On evaluating the layout of UML diagrams for program comprehension. Software Quality Journal, 14(3), 233–259. doi:10.1007/s11219-006-9218-2
Wu, B., Lawless, D., Bisbal, J., Richardson, R., Grimson, J., Wade, V., et al. (1997). The butterfly methodology: A gateway-free approach for migrating legacy information system. In ICECOS (pp. 200-205). Los Alamos, CA: IEEE Computer Society Press.
Zhou, Y., & Kontogiannis, K. (2003). Incremental transformation of procedural systems to object-oriented platform. Proceedings of COMPSAC, Dallas, TX.
Key Terms and Definitions

Butterfly Approach: An iterative data reengineering approach where the legacy data are frozen for read-only access until the data transformation process to the target database is complete. This approach assumes that the legacy data are the most important part of the reengineering process and focuses on the legacy data structure, rather than its values, during migration.

Chicken Little Approach: An approach that allows the coexistence of the legacy and target databases during the data reengineering phase through the use of a gateway that translates data access requests from the legacy system for use by the target database system and then translates the result(s) from the target database for use by the legacy system.

Conceptual Conversion Strategy: A strategy that focuses first on the recovery of the precise semantic meaning of data in the source database and then on the development of the target database, using the conceptual schema derived from the recovered semantic meaning of the data, through standard database development techniques.

Domain Analysis: A technique that identifies commonalities and differences across programs and data. Domain analysis is used to identify design patterns in software and data.

Legacy Data: Historical data used by a legacy system, which could be defined as a long-term mission-critical system that performs important business functions and contains comprehensive business knowledge.

Multivalued Attribute: An attribute, or field, of a table or file that may have multiple values. For example, in a COBOL sequential file, a record may have a field, A, with several allowable values (Y, N, D). Translating this multivalued attribute to its relational database equivalent is difficult; hence, lists or linked tables containing the possible values of this attribute are used in order to represent it in the relational model.

Physical Conversion Strategy: A strategy that does not consider the semantic meaning of the data but simply converts the existing legacy constructs of the source database to the closest corresponding constructs of the target database.
This work was previously published in Handbook of Research on Innovations in Database Technologies and Applications: Current and Future Trends, edited by Viviana E. Ferraggine, Jorge Horacio Doorn and Laura C. Rivero, pp. 37-44 , copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.14
Semantically Modeled Databases in Integrated Enterprise Information Systems Cheryl L. Dunn Grand Valley State University, USA Gregory J. Gerard Florida State University, USA Severin V. Grabski Michigan State University, USA
Introduction

Semantically modeled databases require their component objects to correspond closely to real world phenomena and preclude the use of artifacts as system primitives (Dunn and McCarthy, 1997). Enterprise information systems (also known as enterprise resource planning systems) based on semantically modeled databases allow for full integration of all system components and facilitate the flexible use of information by decision-makers. Researchers have advocated semantically designed information systems because they provide benefits to individual decision-makers (Dunn and Grabski, 1998, 2000), they facilitate organizational productivity and inter-organizational communication (Cherrington et al., 1996; David, 1995;
Geerts and McCarthy, 2002), and they allow the database to evolve as the enterprise does through time (Abrial, 1974). Organizations have implemented enterprise resource planning (ERP) systems in an attempt to improve information integration. Much of the value of these ERP systems is in the integrated database and associated data warehouse that is implemented. Unfortunately, a significant portion of the value is lost if the database is not a semantic representation of the organization. This value is lost because the semantic expressiveness is insufficient -- relevant information needed to reflect the underlying reality of the organization’s activities is either not stored in the system at all, or it is stored in such a way that the underlying reality is hidden or disguised and therefore cannot be interpreted.
DOI: 10.4018/978-1-60566-242-8.ch026
Partly as a result of systems lacking expressive semantics, researchers have been developing ontologies. Gruber (2008) provides a useful definition of ontology: “In the context of database systems, ontology can be viewed as a level of abstraction of data models, analogous to hierarchical and relational models, but intended for modeling knowledge about individuals, their attributes, and their relationships to other individuals. Ontologies are typically specified in languages that allow abstraction away from data structures and implementation strategies; in practice, the languages of ontologies are closer in expressive power to first-order logic than languages used to model databases. For this reason, ontologies are said to be at the “semantic” level, whereas database schema are models of data at the “logical” or “physical” level. Due to their independence from lower level data models, ontologies are used for integrating heterogeneous databases, enabling interoperability among disparate systems, and specifying interfaces to independent, knowledgebased services.” We base our discussion in this paper on the Resources-Events-Agents (REA) ontology (McCarthy, 1982; Geerts and McCarthy 1999; 2000; 2004; 2001; 2002; Haugen and McCarthy, 2000) which is considered an enterprise ontology or a business domain ontology. Ontologically-based information systems with common semantics are regarded as a necessity to facilitate inter-organizational information systems (Geerts and McCarthy, 2002). Presently, most inter-organizational data is sent via EDI (which requires very strict specifications as to how the data are sequenced and requires some investment by adopting organizations). The same requirement holds true for web-based systems. There is no or very limited knowledge inherent in those systems. Alternatively, if trading partners implement systems based on the same
underlying semantic model, many of the current problems can be eliminated. This chapter first presents a normative semantic model for enterprise information systems that has its roots in transaction processing information systems. We use this model because the majority of information processed and tracked by information systems is transactional in nature. We review empirical research on semantically modeled information systems and then provide an example company’s semantic model as a proof of concept. We next discuss how this model can be applied to ERP systems and to inter-organizational systems and present future trends and research directions, and provide concluding comments.
Semantic Model Development

In this chapter, we adopt a definition of an enterprise information system that is based on David et al.'s (1999) definition of an accounting information system: an enterprise information system captures, stores, manipulates, and presents data about an organization's value-adding activities to aid decision-makers in planning, monitoring, and controlling the organization. This definition is also consistent with much of the research on ERP systems. We recommend that the REA ontology (REA semantic model) (McCarthy, 1982) be used as the core foundation of enterprise information systems due to the model's robust and general nature. The semantics of the REA model are designed to capture the essential features of value-added activities – activities that correspond to exchanges of resources (e.g., giving inventory and receiving cash) and transformations of resources (converting raw materials into finished goods). The basic REA model is presented in Figure 1 using entity-relationship notation (Batini et al., 1992); however, it has also been implemented in NIAM (Geerts and McCarthy, 1991) and in object notation (Nakamura and Johnson, 1998). The general REA model for any particular transaction cycle consists
Figure 1. The original REA model (McCarthy, 1982)
of the following components (these components are presented in list form for the sake of brevity). Readers are encouraged to read McCarthy (1982) and Dunn et al. (2005) for more detail.

• Two economic events that represent alternative sides of an economic exchange (one increment event and one decrement event).
• Two resources that represent what is received and given up in the economic exchange.
• Two internal agents that represent the company's personnel responsible for the economic events (usually one agent type for each event).
• One external agent that represents the person or company with whom the company is engaging "at arms' length" in the exchange.
• Duality relationship between the increment and decrement economic events.
• Stock-flow relationships between the events and the associated resources, representing the inflows or outflows of the resources resulting from the events.
• Responsibility relationships between the events and the internal agents.
• Participation relationships between the events and the external agents.
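One plausible relational rendering of the basic pattern just listed is sketched below. The table and column names are our own, and a real design would carry the attributes and cardinality constraints of the specific enterprise model rather than bare keys.

```python
# One plausible relational rendering of the basic REA pattern listed above.
# All names are illustrative assumptions, not part of the cited works.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE resource       (resource_id  TEXT PRIMARY KEY);
    CREATE TABLE internal_agent (int_agent_id TEXT PRIMARY KEY);
    CREATE TABLE external_agent (ext_agent_id TEXT PRIMARY KEY);

    -- Decrement event (e.g., sale): responsibility and participation
    -- relationships are carried as foreign keys to the agents.
    CREATE TABLE decrement_event (
        dec_event_id TEXT PRIMARY KEY,
        int_agent_id TEXT REFERENCES internal_agent,
        ext_agent_id TEXT REFERENCES external_agent);

    CREATE TABLE increment_event (
        inc_event_id TEXT PRIMARY KEY,
        int_agent_id TEXT REFERENCES internal_agent,
        ext_agent_id TEXT REFERENCES external_agent);

    -- Duality: which increment events are paired with which decrement events.
    CREATE TABLE duality (
        dec_event_id TEXT REFERENCES decrement_event,
        inc_event_id TEXT REFERENCES increment_event,
        PRIMARY KEY (dec_event_id, inc_event_id));

    -- Stock-flow: resource outflows and inflows attached to the events.
    CREATE TABLE stockflow_out (
        dec_event_id TEXT REFERENCES decrement_event,
        resource_id  TEXT REFERENCES resource,
        PRIMARY KEY (dec_event_id, resource_id));

    CREATE TABLE stockflow_in (
        inc_event_id TEXT REFERENCES increment_event,
        resource_id  TEXT REFERENCES resource,
        PRIMARY KEY (inc_event_id, resource_id));
""")
```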
The general model outlined above can be modified to accommodate specific needs. The original model was described at only the operational level, which is sufficient for historical accounting purposes. The REA model has now been extended to an inter-organizational database systems ontology that incorporates both the operational (what has happened) and knowledge levels (what should happen as a policy or expectation) and includes the following additional features and components, again presented in list form (Geerts and McCarthy, 2002).
• Integration of process (cycle) level models into a value-chain level model at a higher level of abstraction.
• Expansion of process (cycle) level models into workflow or task level models at lower levels of abstraction.
• Separation of components into continuants (enduring objects with stable attributes that allow them to be recognized on different occasions throughout a period of time) and occurrents (processes or events that are in a state of flux).
• Type images that represent category-level abstractions of similar components.
• Commitment images that represent agreements to engage in future economic events.
• Assignment relationships between agents that represent the designation of an agent category to work with another agent category (e.g., salesperson assigned to customer).
• Custody relationships between agents and resources that represent the agents that are accountable for various resources.
• CommitsTo relationships between commitment images and the resulting economic events.
• Partner relationships between commitment images and the participating agents.
• Reserved relationships between commitment images and the resources that are the proposed subject of the future exchange.
• Typification description relationships between continuant components and the categories to which they belong, e.g., resource-resource type relationships.
• Characterization description relationships between continuant type images, e.g., agent type-agent type relationships and agent type-resource type relationships.
• Typification history relationships between physical occurrents and their types, indicating that the occurrents share the same script, e.g., event-event type.
• Scenario history relationships between abstract occurrents and other abstractions, e.g., event type-resource type.
• Business Process as a description of the interaction between resources, agents, and dual events.
• Partnering as the purpose of the interaction between resources, agents, and commitments.
• Segmentation as a description of the grouping of physical categories into abstract continuant categories.
• Policy or Standard as a description of the expression of knowledge-level rules between abstract types (e.g., scripts and scenarios).
• Plan as a description of the application of a script to physical occurrents.
• Strategy as a description of rules for the execution of a Business Process or Partnering.
The semantics contained within the expanded REA model facilitate the information exchange between trading partners and likely provide a needed core foundation for ERP systems. For example, research reported by Andros et al. (1992) and Cherrington et al. (1996) found that IBM was able to obtain significant benefits from a semantically modeled system based on the REA model. Reported benefits of the new system included a significant reduction in the time to process employee reimbursements, significant cost reductions, and a generally high level of employee satisfaction. These studies demonstrate how the semantic models were used as the basis for systems design, and then how the resultant systems were perceived by the end-users, thereby completing the research loop from design to end-user. Weber (1986) empirically evaluated the REA semantic model. His objective was to determine whether software practitioners had both identified and solved the same problems as identified by academicians. He found that the REA model fulfilled
its objective as a generalized model and that it was a good predictor of the high-level semantics found in all twelve packages. He believed that the REA model excluded certain types of events such as contracts and suggested extensions (many of which have been subsequently incorporated into the model). Weber reported that important differences in the low-level semantics existed across all packages, and that these seem to reflect the relative complexity of the different packages. Nonetheless, the clustering of the low-level semantics of the packages according to the REA model allowed for the identification of similarities and differences among the packages, and the likely strengths and limitations of the package become apparent. On a datalogical level, Weber found few violations of normal form given the number of fields in the packages. He concluded that theory was a good predictor of design practice and that the empirical evidence supports normalization theory. David (1995) developed a metric to classify organizations’ accounting systems characteristics (ASC) along a continuum between traditional general ledger based accounting systems and REA systems. The ASC metric was developed based upon characteristics that were identified in theoretical research as critical characteristics for REA systems. David visited companies in the pulp and paper industry and conducted structured interviews and used the ASC metric to determine each system’s position on the continuum. Data was also gathered as to the companies’ productivity, efficiency, and the company executives’ perceptions of competitive advantage. REA-like systems were found to be associated with productivity and administrative efficiencies. O’Leary (2004) compared the REA semantic model with an enterprise resource software package. He compared information about SAP from various sources and determined that SAP is consistent with the REA model in its database, semantic and structure orientations. However, SAP was also found to contain implementation compromises in the structuring and semantic ori-
entation, based in part on accounting artifacts. The compromises are likely due, in part, to the fact that software vendors' products are constantly evolving in an incremental fashion, and vendors never do a software redesign starting with a clean slate. Research has demonstrated that organizations are able to obtain significant benefits from a semantically modeled system based on the REA framework (Andros et al., 1992; Cherrington et al., 1996). However, even semantically modeled systems are not sufficient to ensure success. Rather, success is dependent upon a variety of factors, including top management support and corporate philosophy, in addition to the benefits inherent in the semantically modeled system (Satoshi, 1999). Research has also demonstrated that the REA model is sufficient as a generalized model (Weber, 1986; O'Leary, 2004), and the more REA-like a system is, the more it is associated with productivity and administrative efficiencies (David, 1995). We next present an REA model of a prototypical retail firm. This example will serve as further proof of concept and allow the reader to verify that the semantic data model accurately represents the reality of the described organization, and that there is an application of the REA model with consistent patterns. First, we will provide a narrative for the firm and describe the sales cycle in detail. Then we will describe the components of a REA model (in entity-relationship notation, using guidelines set forth by Batini et al., 1992) that captures this reality. Finally, we will discuss the benefits of the REA model that occur independent of the entity-relationship notation. Our example is for a company called Robert Scott Woodwinds (RSW), a retail organization that sells, leases, and repairs woodwind musical instruments, and sells music supplies. This organizational setting encompasses many of the activities performed by most business organizations. While the semantics of the model are specific to RSW's environment, the model provides a generic template for a retail organization's semantic model; it also provides
a template for leasing organizations, and it has significant commonalities with manufacturing firms in the repair aspects of the business.
REA Semantic Model Example

For the scenario presented below, the resources, events, and agents are captured in REA diagrams (in entity-relationship form) in Figures 2 through 5. RSW sells, repairs, and leases musical woodwind instruments from a single store. The business caters to professional musicians, casual musicians, and children who are involved in school music programs. While RSW clearly needs to have a web presence, it anticipates only a minor portion of its revenues from web sales (primarily for consumables such as reeds and sheet music). Professional musicians want to play and get a feel for the particular instrument that they are interested in purchasing and will always come into the store. Further, these musicians are very particular about who will repair their instrument. For repairs, the instrument must be brought or sent to the store. School sales are a result of sales staff calling on the band directors at the various schools and gaining permission to be the supplier for that
school (usually this means that the children will rent from RSW, not that the school will purchase the instruments from RSW). Following is information about RSW’s revenue (including sales, sales returns, repairs, and leases) cycle.
Revenue Cycle

RSW generates revenue in three ways. The first is through sales of instruments; the second is through rental of instruments; and the third is through the repair of instruments. In the retail instrument business, renting instruments so that customers can "try before they buy" is a great way of generating sales. This is particularly true in the school marketplace. Thus, RSW sends salespeople (who also have music teacher training) into the schools to host "music talent exploration" sessions for the children. When a child has been identified as having talent for (and some interest in) a specific instrument, the parents will be offered the opportunity to rent that instrument for 3 months. At the end of the 3-month trial period, the parents may purchase that instrument (or a different instrument) and the sale price they pay will reflect a discount equal to the amount of rent paid. When
Figure 2. RSW sales revenue expanded REA data model
Figure 3. RSW rental revenue cycle expanded REA data model
Figure 4. RSW sales returns expanded REA data model
an instrument is rented out, it is transferred from “Inventory for Sale” to “Inventory for Rent”. If it is later sold, the “Inventory for Rent” category will be decreased. RSW’s salespeople complete sales invoices and rental invoices for customers. The customer may purchase (rent) multiple instruments on a single sales (rental) invoice. Instruments are either
delivered to the customer at the school during the salesperson’s next sales call, or the customer may pick up the instruments at the store. Cash receipts are processed by cashiers and are deposited into the company’s main bank account each day. The semantic data models for sales and for leases are presented in Figures 2 and 3 respectively.
Figure 5. RSW repair service revenue expanded REA data model
As can be seen in the sales revenue data model (Figure 2), the original REA template has been applied. The resource (inventory) is associated with an economic decrement event (sale) via a stockflow relationship. The sale event is associated with the internal agent (salesperson) and the external agent (customer) via control relationships. There is a duality relationship between the economic decrement event (sales) and the economic increment event (cash receipt). Cash receipt is associated with the resource (cash) via a stockflow relationship and to the internal agent (cashier) and the external agent (customer) via control relationships. Note that the cardinalities between sale and cash receipt disclose that a sale does not require the immediate receipt of cash. Rather, a sale may be paid for at one later point in time, in installments over a period of time, or not paid for at all (because of a sale return or bad debt). Newer aspects of the REA ontology have also been included in this model. The sales order event is a commitment image and is related to the resulting economic event (sales) via a CommitsTo relationship. It is also related to salesperson via a Control relationship, to customer via a Partnering relationship, and to inventory via a Reserved relationship. In addition, there is an Assignment
relationship established between salesperson and customer. Sales personnel can be assigned to many different customers, and a customer has, at most, one sales person assigned to them. The REA ontology allows for the inclusion of additional entities that managers need information about for planning, controlling and evaluating individual and organizational performance. These additions do not change the underlying REA template; they simply add to it. For example, the sales call event that leads to the sales order event is modeled, and is linked to the associated resource and agents. Further, the external agent set has been expanded to include school and district. A customer does not need to be associated with a school, and a school may be listed even if no customers from that school have begun to trade with RSW. A school does not need to belong to a school district (i.e., it is a private school), however, if it belongs to a school district, it can only belong to one school district. A school district has at least one school. The information presented in the semantic model is a representation of the reality of the world of RSW. The cardinalities contain the “business rules” that are followed by the entities in their relationships. A similar analysis can be performed for the lease revenue data model (Figure 3), and
in fact, the only differences between the two are the events of rental order and rental contract in place of sales order and sale. The observation of such commonalities allows for the development of common business practices for all revenue activities and facilitates reengineering at the task level. In a similar manner to the sales and leases, a semantic model of sales returns and lease returns can also be created. The sale return data model (Figure 4) has the economic decrement sale event associated with the economic increment event sale return, and the sale return is associated with the subsequent economic decrement event cash disbursement (the sale return might not result in a cash disbursement if no cash was received at the time of the sale; this is represented by the 0 min cardinality on the sale return). The rental returns data model is analogous (the difference is in the economic increment event of rental return, which is associated with the rental contract; this model is not presented in the interest of brevity). The third form of revenue generated by RSW is through instrument repair services. RSW offers a variety of repair service types. "Complete overhauls" are available for many instruments. Prices of these complete overhauls vary by instrument type. "Repad overhauls" are also available for each instrument type. These are less expensive than complete overhauls and price varies by instrument type. Customers may have individual keys replated at a fixed cost per key. Crack repairs will be judged as minor or major and will be priced accordingly. Other repairs are also available. The semantic model for repair revenue is presented in Figure 5.
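To suggest how such a diagram might be implemented, the sketch below renders a fragment of the RSW sales model of Figure 2 as relational tables. The names are our own, and the diagram's cardinalities are approximated with nullable versus mandatory foreign keys – for example, a customer has at most one assigned salesperson, and a sale need not (yet) be linked to any cash receipt.

```python
# A sketch of a fragment of the RSW sales model (Figure 2) as relational
# tables. Names and datatypes are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE salesperson (sp_id TEXT PRIMARY KEY);
    CREATE TABLE customer (
        cust_id TEXT PRIMARY KEY,
        sp_id   TEXT REFERENCES salesperson);  -- Assignment: at most one

    -- Commitment image: the sales order.
    CREATE TABLE sales_order (
        order_id TEXT PRIMARY KEY,
        sp_id    TEXT NOT NULL REFERENCES salesperson,  -- Control
        cust_id  TEXT NOT NULL REFERENCES customer);    -- Partnering

    -- Economic decrement event; CommitsTo links it to its order.
    CREATE TABLE sale (
        sale_id  TEXT PRIMARY KEY,
        order_id TEXT REFERENCES sales_order,           -- CommitsTo
        sp_id    TEXT NOT NULL REFERENCES salesperson,
        cust_id  TEXT NOT NULL REFERENCES customer);

    -- Economic increment event; a sale may be paid never, once, or in
    -- installments, so duality is kept in a separate pairing table.
    CREATE TABLE cash_receipt (
        receipt_id TEXT PRIMARY KEY,
        cust_id    TEXT NOT NULL REFERENCES customer,
        amount     REAL);
    CREATE TABLE duality (
        sale_id    TEXT REFERENCES sale,
        receipt_id TEXT REFERENCES cash_receipt,
        PRIMARY KEY (sale_id, receipt_id));
""")
```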
etc.). This serves as the bill of materials for the service and allows storage of information about the “categories” to which the repair services belong. Another internal agent is also included, that of repair person. This allows RSW to track the person was responsible for a given repair. This is similar to any type of job shop or organization that must track employee time (e.g., consulting firms). For simplicity and in the interest of brevity, we do not present the acquisition, payroll or financing cycles. The acquisition model is similar to the revenue cycle, with cash disbursement being the decrement event, purchase the increment event and vendor is the external agent. The payroll cycle would be similar to the acquisition cycle except that services are acquired from employees (hence employees act as external agents) instead of from vendors or suppliers. The financing model is also similar to the acquisition process except that the resource acquired and consumed is cash. Sometimes the flow of resources may not be measurable, for example, in the payroll cycle the resource acquired is employee labor. This is an intangible resource that may be difficult, if not impossible to measure in terms of stockflow. The REA ontology requires that such things be considered and only left out of the model if they are impossible to implement. The REA ontology can be implemented using alternative notations, including object notation, UML, and others. We have chosen to represent it with entity-relationship notation because of its simplicity and widespread use. The benefit of the REA ontology is not found in the notation itself, but in the repeated application of the pattern to all of the various business cycles. Consistent use of the template gives users an ability to understand the reality being represented, and it also provides system reusability and extendibility. Dunn and McCarthy (2000) describe four benefits of the REA model. First, the standardized use and definition of information structures across organizational boundaries facilitates electronic commerce, by enabling the retrieval, sending, and integration of
Dunn and McCarthy (2000) describe four benefits of the REA model. First, the standardized use and definition of information structures across organizational boundaries facilitates electronic commerce by enabling the retrieval, sending, and integration of data among business partners or research sources. Second, intellectual complexity is made manageable by the use of natural primitives that abstract to generalized descriptions of structures, which in turn cover many thousands of cases with as few exceptions as possible. Third, consistent use of the REA pattern can enable system-based reasoning and learning. Fourth, the use of system primitives that closely mirror the underlying phenomena enables system adaptability.

It is important to note that the REA ontology and other semantic modeling approaches have been studied at the individual decision-making level, both for system designers and users. Dunn and Grabski (2002) provided an extensive literature review of individual-level semantic models research. The overall conclusions of their review were as follows. Accounting systems based on the REA model are perceived as more semantically expressive by end users than are accounting systems based on the traditional debit-credit-account model. Also, accounting systems perceived as semantically expressive result in greater accuracy and satisfaction for end users than do non-semantically expressive accounting systems (Dunn and Grabski, 2000). Conceptual modeling formalisms are superior to logical modeling formalisms for design accuracy (Sinha and Vessey, 1999; Kim and March, 1995). The ability to disembed the essential objects, and the relationships between those objects, from complex surroundings depends on a cognitive personality trait, field independence, and leads to more accurate conceptual model design (at least for undergraduate students) (Dunn and Grabski, 1998). The focus on increment and decrement resources and events, along with the agents associated with those events, is consistent with database designers' thought processes. Additionally, knowledge structures consistent with the REA template's structuring orientation are associated with more accurate conceptual accounting database design (controlling for knowledge content, ability, and experience level) (Gerard, 2005).
The lack of mandatory properties with entities is not critical (Bodart et al., 2001), perhaps because of the semantics inherent in the modeled system. System designers distinguish between entities and relationships, with entities being primary (Weber, 1996). Data and process methodologies are easier for novices than the object methodology and result in fewer unresolved difficulties during problem-solving processes (Vessey and Conger, 1994), with a more pronounced effect for process-oriented tasks (Agarwal, Sinha, and Tanniru, 1996a). Experience in process modeling matters, regardless of whether the modeling tool (process versus object-oriented) is consistent with that experience (Agarwal, Sinha, and Tanniru, 1996b).
SEMANTIC MODELS, ERP SYSTEMS, AND INTER-ORGANIZATIONAL SYSTEMS

The American Production and Inventory Control Society (APICS, 1998) defined an ERP system as "an accounting-oriented information system for identifying and planning the enterprise-wide resources needed to take, make, ship, and account for orders." David et al. (1999) proposed using REA as a basis for comparison among systems and ERP packages. This was based, in part, on Weber (1986), who reported that a comparison at the level of symbol sets made semantic similarities and differences apparent. Additionally, O'Leary (2004) compared SAP to the REA model and determined that SAP is REA-compliant; however, SAP has significant implementation compromises based on accounting artifacts. Consistent with David et al. and based upon the APICS definition of ERP systems, we believe that REA is a robust candidate against which ERP systems may be compared because of its strong semantic, microeconomic, and accounting heritage. More importantly, we believe that semantic models must be used as a basis for the information system because of the information contained within the semantics. Watson and Schneider (1999) also emphasized the importance of ERP systems in providing the process-centered modeling perspective necessary for successful organizations.
They emphasized understanding the underlying process model inherent in ERP systems; that is, the underlying semantics of the system. Further, the SAP R/3 business blueprint provides four enterprise views: process, information, function, and organization. It is in the organization view that the semantic relationships among organizational units are represented. As most ERP implementations will require some modifications to meet specific business needs, the analysts, designers, and users need to understand the changes and their subsequent planned and unplanned impact. A semantically based model will facilitate this understanding.

The REA ontology provides a high-level definition and categorization of business concepts and rules, enterprise logic, and accounting conventions of independent and related organizations (Geerts and McCarthy, 2002). The REA ontology includes three levels: the value chain level, the process level, and the task level. The value chain level models an enterprise's "script" for doing business; that is, it identifies the high-level business processes or cycles (e.g., revenue, acquisition, conversion, financing) in the enterprise's value chain and the resource flows between those processes. The process level model represents the semantic components of each business process. The example in section 2 depicted the process level of the REA ontology for RSW. The task (or workflow) level of the REA ontology is the most detailed level and includes a breakdown of all steps necessary for the enterprise to accomplish the business events that were included at the process level. The task level can vary from company to company without affecting the integration of processes or inter-organizational systems; therefore, we do not present an elaboration of this level (it is also the level at which most reengineering occurs). An illustration of the value chain level for RSW is presented in Figure 6.
The value chain level describes how the business processes within the company fit together and what resources flow between them. In RSW, financing is obtained and, as a result, cash is distributed as needed to the various acquisition processes. Some of that cash is used to acquire labor, which is in turn distributed to each of the other business processes. Some of the cash is used to acquire instruments and supplies (both supplies for sale and supplies used in the repair services), and some is used to acquire G&A services. The instruments, supplies, G&A services, and labor are received by the three revenue processes and are combined in various ways to generate revenue. The resulting cash is then distributed to the financing process, where it is used to repay financing (including distribution of earnings to stockholders if this were a corporation) or to re-distribute cash to the other business processes.

The value chain level demonstrates integration opportunities for the enterprise-wide information system. Resources that flow from one business process to another can be modeled in the database one time and accessed by the appropriate business processes. Then, as the separate business processes have their events linked to those resources, the entire system is integrated and can be examined at a level of abstraction higher than the individual business process level.

For inter-organizational system modeling, a fourth level could be added to the REA ontology: the value system level. This enables an integration that spans the conceptual models of each organization, recognizing that many business activities and phenomena, such as electronic commerce and supply chain management, involve multiple organizations. A company's value system consists of its relationships with all other entities with which the company makes exchanges. For example, a company engages in exchanges with its employees, its suppliers, its customers, and its creditors. An enterprise system model at this level would represent the resource flows between the various types of organizations in the enterprise's value system. For RSW, the value system model is depicted in Figure 7.
Figure 6. RSW value chain level model
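The resource flows just described can also be written down directly. The following sketch, with process and resource names abbreviated by us for illustration from those in Figure 6, shows how modeling each flow once lets every process that touches a given resource be found from the same data:

    # Hypothetical encoding of RSW's value chain level:
    # (from_process, resource, to_process) triples.
    value_chain_flows = [
        ("financing", "cash", "labor acquisition"),
        ("financing", "cash", "instrument/supplies acquisition"),
        ("financing", "cash", "G&A services acquisition"),
        ("labor acquisition", "labor", "sales revenue"),
        ("labor acquisition", "labor", "rental revenue"),
        ("labor acquisition", "labor", "repair revenue"),
        ("instrument/supplies acquisition", "instruments and supplies", "sales revenue"),
        ("sales revenue", "cash", "financing"),
        ("rental revenue", "cash", "financing"),
        ("repair revenue", "cash", "financing"),
    ]

    def processes_touching(resource):
        """Every process that supplies or receives the given resource."""
        return sorted({p for frm, res, to in value_chain_flows
                         if res == resource for p in (frm, to)})

    print(processes_touching("cash"))

Because the resource appears once and is merely referenced by each flow, queries at this level of abstraction span the whole enterprise rather than a single business process.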
Figure 7. RSW value system level model

Partnering organizations that build their enterprise databases on the REA ontology will be better able to integrate their databases. This will probably be best accomplished using object technology and artificial intelligence concepts such as automated intensional reasoning (Geerts and McCarthy, 1999) and automated intelligent agents.
Automated intensional reasoning systems make inferences based on database table intensions and require completely consistent use of any underlying pattern such as REA. Although REA is being used by some consultants and software developers, as indicated in the literature review in section 2 (see also, e.g.,
REA Technology at http://reatechnology.com/ and Workday at http://www.workday.com), there are various other approaches to integrating systems within and across organizations. Approaches to integrating systems within organizations have primarily been ERP software-driven and have also focused on document exchange. Wakayama et al. (1998) state that "rethinking the role of documents is central to (re)engineering enterprises in the context of information and process integration" because documents "are a common thread linking integration issues." Electronic data interchange is certainly document-oriented, with standards developed for issues such as which field on an electronic purchase order transmitted between companies represents the quantity ordered. More recent advances for web-based transmission of documents, such as the various markup languages, have also focused on identifying fields that exist on these documents.

We believe it is not the documents themselves that provide the common thread among companies' business processes. Rather, it is the similar nature of the underlying transactions, and of the information needs associated with managing those transactions, that is depicted within the organization's semantic model. Although organizations can have very different "scripts" for doing business, the core of the scripts is often the same. Therefore, rather than taking a document-oriented approach to inter-organizational system integration, we believe it is more important to apply an enterprise ontology approach such as REA.

Vendors of enterprise resource planning systems recognize the need for inter-organization integration and supply chain management. This integration issue has not been easily resolved, and "bolt-on" applications are typically used. It is possible that some of the difficulties in integrating systems across organizations with ERP systems are due to the requirement of many ERP packages that organizations conform to the software (in the form of "best practices") at the task level.
REA encourages conformance to a core pattern at the business process level (or at least to a core set of pattern components), but allows considerable variation at the task level. We suggest that the integration of organizations' information systems does not need to occur at the task level; it can occur at the value system and process levels, where the core pattern components can easily be standardized (however, as noted above, the systems need to include task level specifications of homonyms and synonyms to facilitate integration). As Swagerman, Dogger, and Maatman (2000) note, standardization of patterns of behavior ensures that the semantic context is clear to every user. We believe the standardization of patterns and pattern components, along with the development of appropriate artificial intelligence tools, will allow system integration without imposing formal structures on the social domain.

Limited research has been conducted on the similarities of the semantic models underlying the current ERP packages. Nonetheless, these models do exist, and many organizations reengineer themselves to become consistent with the best practices embodied within these models. Unfortunately, this reengineering often occurs at the task level, and the benefits of the underlying semantics are lost. This is very apparent when organizations seek to extend their value chains up and down their supply chain. If the underlying semantics were preserved, along with the standardization of semantic patterns, then automated intensional reasoning and other knowledge-based tools would be able to facilitate inter-organizational trading.

Semantically modeled enterprise information systems will provide many benefits, from the individual decision maker level to the inter-organizational level. The critical issue is to ensure that the semantics are not lost upon the implementation of the system and obscured by the task level mechanics. When this occurs, all subsequent benefits are lost and we are faced with the task of integrating disparate systems that are conceptually identical.
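As a hypothetical illustration of such task level synonym specifications (the term lists below are invented), two organizations with different vocabularies can be aligned by mapping each local term to the shared REA pattern component it denotes:

    # Each partner maps its local event names to shared pattern components.
    org_a = {"sale": "economic decrement event",
             "cash receipt": "economic increment event"}
    org_b = {"shipment": "economic decrement event",
             "customer payment": "economic increment event"}

    def aligned_pairs(a_map, b_map):
        """Pair local terms that denote the same REA pattern component."""
        return [(ta, tb) for ta, ca in a_map.items()
                         for tb, cb in b_map.items() if ca == cb]

    print(aligned_pairs(org_a, org_b))
    # [('sale', 'shipment'), ('cash receipt', 'customer payment')]

Integration then proceeds against the pattern components, leaving each organization free to vary at the task level.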
FUTURE TRENDS IN SEMANTICALLY MODELED SYSTEMS

Electronic commerce is commonplace; however, the majority of transactions in business-to-business electronic commerce rely on the transmission of data based on rigidly structured electronic data interchange formats. We believe that the use of an ontological modeling approach such as the REA ontology discussed in the previous sections has the potential to enhance business-to-business electronic commerce. The issues related to supply chain management are similar to those of e-commerce. In fact, when examining inter-organizational systems from a supply chain perspective, the same set of issues applies as found in business-to-business e-commerce. Consequently, our recommendations and expectations are the same, and these have been presented in the prior sections. The use of an ontological system like the REA ontology is a necessary but not sufficient condition for effective and efficient inter-organizational systems. Again, intelligent agents and automated intensional reasoning are also required for this to occur. Further, the semantics of the systems must not be obscured by subsequent implementation artifacts.

There are many issues that still need to be resolved, and as such they present many research opportunities. One issue focuses on the scalability of systems based on the REA ontology. The system presented in this chapter is for a relatively small organization; how this translates into a system for a large multinational firm needs to be explored. Also, while research has been conducted on automated intensional reasoning (Rockwell and McCarthy, 1999), much more is needed. Further, this needs to be extended to the use of intelligent agents and to object-based environments. Another issue is that of preserving the semantics at an operational level, beyond the level of the database itself. This would allow decision-makers additional insight into the problems and the information available to address the issues that they face.
Again, object-based systems seem to provide the most benefit, but additional research is needed.

Regardless of what the research demonstrates, organizations will need to be convinced that they should change the ERP systems that they have acquired. These systems required an immense investment, and in some cases they are still not functioning in an acceptable manner. It is most likely that change will need to be driven by the ERP vendors themselves. They would have a vested interest in selling upgrades to their systems as long as they can demonstrate some type of advantage for the consumer. This has occurred with a great deal of regularity in the PC market. A significant number of organizations would need to make the change in order for the benefits discussed in this chapter to occur. The first fax machine sold did not provide as much value as the one millionth fax machine sold; the more fax machines are sold, the greater the opportunities for sending and receiving information. Similarly, the value of the first REA-based system for inter-organizational use will be limited (although it will facilitate intra-organization needs), but the more companies that realize the value of these systems and build them, the more value will accrue to the first such system.
CONCLUSION

In this chapter we presented a normative semantic model for designing integrated databases for enterprise information systems. This model was developed by McCarthy (1982), and its expanded form has been proposed as an enterprise ontology by Geerts and McCarthy (2002). This ontology is intended to serve as a foundation for integrated enterprise-wide and inter-organizational systems. To take full advantage of the semantically rich ontological patterns and templates, the REA ontology must be implemented with current advances in artificial intelligence technology and object-oriented database technology.
Many of the current problems faced by companies that attempt to install ERP systems and integration tools such as EDI can be minimized by the use of common semantic patterns that can be reasoned about by intelligent systems. Companies may integrate their systems without using identical business practices.

Non-accounting researchers have conducted most of the existing research on semantic models, both at the individual level and at the organization level. Because REA originated in an accounting domain, non-accounting researchers have not embraced it (perhaps because of their lack of awareness of the REA ontology). We hope that by making information about this enterprise ontology more available to non-accounting researchers who are interested in semantically modeled information systems, we will encourage more interest and participation in REA research.
REFERENCES

Abrial, J. R. (1974). Data semantics. In J. W. Klimbie & K. L. Koffeman (Eds.), Data Base Management (pp. 1-60). Amsterdam: North Holland Publishing Company.

Agarwal, R., Sinha, A. P., & Tanniru, M. (1996a). Cognitive fit in requirements modeling: A study of object and process methodologies. Journal of Management Information Systems, 13(2), 137–162.

Agarwal, R., Sinha, A. P., & Tanniru, M. (1996b). The role of prior experience and task characteristics in object-oriented modeling: An empirical study. International Journal of Human-Computer Studies, 45, 639–667. doi:10.1006/ijhc.1996.0072

Andros, D., Cherrington, J. O., & Denna, E. L. (1992). Reengineer your accounting the IBM way. The Financial Executive, July/August, 28-31.
APICS. (1998). Defining enterprise resource planning. http://www.apics.org/OtherServices/articles/defining.htm

Batini, C., Ceri, S., & Navathe, S. B. (1992). Conceptual Database Design: An Entity Approach. Redwood City, CA: Benjamin Cummings.

Bodart, F., Patel, A., Sim, M., & Weber, R. (2001). Should optional properties be used in conceptual modeling? A theory and three empirical tests. Information Systems Research, 12(4), 384–405. doi:10.1287/isre.12.4.384.9702

Cherrington, J. O., Denna, E. L., & Andros, D. P. (1996). Developing an event-based system: The case of IBM's national employee disbursement system. Journal of Information Systems, 10(1), 51–69.

David, J. S. (1995). An empirical analysis of REA accounting systems, productivity, and perceptions of competitive advantage. Unpublished doctoral dissertation, Michigan State University.

David, J. S., Dunn, C. L., & McCarthy, W. E. (1999). Enterprise resource planning systems research: The necessity of explicating and examining patterns in symbolic form. Working paper, Arizona State University.

Dunn, C. L., Cherrington, J. O., & Hollander, A. S. (2005). Enterprise Information Systems: A Pattern-based Approach (3rd ed.). New York: McGraw-Hill Irwin.

Dunn, C. L., & Grabski, S. V. (1998). The effect of field independence on conceptual modeling performance. Advances in Accounting Information Systems, 6, 65–77.

Dunn, C. L., & Grabski, S. V. (2000). Perceived semantic expressiveness of accounting systems and task accuracy effects. International Journal of Accounting Information Systems, 1(2), 79–87. doi:10.1016/S1467-0895(00)00004-X
Dunn, C. L., & Grabski, S. V. (2002). Empirical research in semantically modeled accounting systems. In V. Arnold & S. G. Sutton (Eds.), Researching Accounting as an Information Systems Discipline (pp. 157-180). Sarasota, FL: American Accounting Association.

Dunn, C. L., & McCarthy, W. E. (1997). The REA accounting model: Intellectual heritage and prospects for progress. Journal of Information Systems, 11(1), 31–51.

Dunn, C. L., & McCarthy, W. E. (2000). Symbols used for economic storytelling: A progression from artifactual to more natural accounting systems. Working paper, Michigan State University.

Geerts, G. L., & McCarthy, W. E. (1991). Database accounting systems. In B. C. Williams & B. J. Spaul (Eds.), IT and Accounting: The Impact of Information Technology (pp. 159-183). London: Chapman & Hall.

Geerts, G. L., & McCarthy, W. E. (1999). An accounting object infrastructure for knowledge-based enterprise models. IEEE Intelligent Systems & Their Applications, (July-August), 89-94.

Geerts, G. L., & McCarthy, W. E. (2000). Augmented intensional reasoning in knowledge-based accounting systems. Journal of Information Systems, 14(2), 127–150. doi:10.2308/jis.2000.14.2.127

Geerts, G. L., & McCarthy, W. E. (2001). Using object templates from the REA accounting model to engineer business processes and tasks. The Review of Business Information Systems, 5(4), 89–108.

Geerts, G. L., & McCarthy, W. E. (2002). An ontological analysis of the primitives of the extended-REA enterprise information architecture. International Journal of Accounting Information Systems, 3, 1–16. doi:10.1016/S1467-0895(01)00020-3

Geerts, G. L., & McCarthy, W. E. (2004). The ontological foundation of REA enterprise information systems. Working paper, Michigan State University.

Gerard, G. J. (2005). The REA pattern, knowledge structures, and conceptual modeling performance. Journal of Information Systems, 19, 57–77. doi:10.2308/jis.2005.19.2.57

Gruber, T. (2008). Ontology. In L. Liu & M. T. Özsu (Eds.), Encyclopedia of Database Systems. Springer-Verlag. Available online at http://tomgruber.org/writing/ontology-definition-2007.htm (accessed 10/5/07).

Haugen, R., & McCarthy, W. E. (2000). REA: A semantic model for internet supply chain collaboration. Presented at Business Object Component Design and Implementation Workshop VI: Enterprise Application Integration, part of the ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications, October 15-19, Minneapolis, Minnesota.

Kim, Y. K., & March, S. T. (1995). Comparing data modeling formalisms. Communications of the ACM, 38(6), 103–115. doi:10.1145/203241.203265

McCarthy, W. E. (1982). The REA accounting model: A generalized framework for accounting systems in a shared data environment. Accounting Review, 57(3), 554–578.

Nakamura, H., & Johnson, R. E. (1998). Adaptive framework for the REA accounting model. Proceedings of the OOPSLA'98 Business Object Workshop IV. http://jeffsutherland.com/oopsla98/nakamura.html

O'Leary, D. E. (2004). On the relationship between REA and SAP. International Journal of Accounting Information Systems, 5(1), 65–81. doi:10.1016/j.accinf.2004.02.004
Rockwell, S. R., & McCarthy, W. E. (1999). REACH: Automated database design integrating first-order theories, reconstructive expertise, and implementation heuristics for accounting information systems. International Journal of Intelligent Systems in Accounting Finance & Management, 8(3), 181–197. doi:10.1002/(SICI)1099-1174(199909)8:33.0.CO;2-E

Satoshi, H. (1999). The contents of interviews with the project manager of FDWH in IBM Japan. Proceedings of the 1999 SMAP Workshop, San Diego, CA.

Scheer, A. W. (1998). Business Process Engineering: Reference Models for Industrial Enterprises. Berlin: Springer-Verlag.

Sinha, A. P., & Vessey, I. (1999). An empirical investigation of entity-based and object-oriented data modeling. In P. De & J. DeGross (Eds.), Proceedings of the Twentieth International Conference on Information Systems (pp. 229-244). Charlotte, North Carolina.

Swagerman, D. M., Dogger, N., & Maatman, S. (2000). Electronic markets from a semiotic perspective. Electronic Journal of Organizational Virtualness, 2(2), 22–42.

Vessey, I., & Conger, S. A. (1994). Requirements specification: Learning object, process, and data methodologies. Communications of the ACM, 37(5), 102–113. doi:10.1145/175290.175305

Wakayama, T., Kannapan, S., Khoong, C. M., Navathe, S., & Yates, J. (1998). Information and Process Integration in Enterprises: Rethinking Documents. Norwell, MA: Kluwer Academic Publishers.

Watson, E. E., & Schneider, H. (1999). Using ERP systems in education. Communications of the Association for Information Systems, 1(9).
Weber, R. (1986). Data models research in accounting: An evaluation of wholesale distribution software. Accounting Review, 61(3), 498–518.

Weber, R. (1996). Are attributes entities? A study of database designers' memory structures. Information Systems Research, 7(2), 137–162. doi:10.1287/isre.7.2.137
KEY TERMS AND DEFINITIONS

Business Process: A term widely used in business to indicate anything from a single activity, such as printing a report, to a set of activities, such as an entire transaction cycle; in this paper, business process is used as a synonym of transaction cycle. (p. 20)

Enterprise Resource Planning System: An enterprise-wide group of software applications centered on an integrated database, designed to support a business process view of the organization and to balance the supply and demand for its resources; this software has multiple modules that may include manufacturing, distribution, personnel, payroll, and financials, and is considered to provide the necessary infrastructure for electronic commerce. (p. 2)

Ontologically-Based Information System: An information system that is based upon a particular domain ontology, where the ontology provides the semantics inherent within the system. These systems facilitate organizational productivity and inter-organizational communication. (p. 3)

Process Level Model: The second level model in the REA ontology, which documents the semantic components of all the business process events. (p. 20)

Resources-Events-Agents (REA) Ontology: A domain ontology that defines constructs common to all enterprises and demonstrates how those constructs may be used to design a semantically modeled enterprise database. (p. 3)
Semantically Modeled Database: A database that is a reflection of the reality of the activities in which an enterprise engages and the resources and people involved in those activities. The semantics are present in the conceptual model, but might not be readily apparent in the implemented database. (p. 3)

Task Level Model: The third level model in the REA ontology and the most detailed, specifying all steps necessary for the enterprise to accomplish the business events that were included at the process level. (p. 6)

Value Chain: The interconnection of business processes via resources that flow between them, with value being added to the resources as they flow from one process to the next. (p. 20)
ENDNOTES

1. An alternative semantic model has been presented by Scheer (1998). Additional research is needed comparing these two semantic models with subsequent evaluations of commercially available ERP packages.

2. The REA framework uses the term business process to mean a set of related business events and other activities that are intended to accomplish a strategic objective of an organization. In this view, business processes represent a high level of abstraction. Some non-REA views define business processes as singular activities that are performed within a business. For example, some views consider "process sale order" to be a business process. The REA view considers "sale order" to be a business event that is made up of specific tasks, and it interacts with other business events within the "Sales-Collection" business process.
This work was previously published in Handbook of Research on Innovations in Database Technologies and Applications: Current and Future Trends, edited by Viviana E. Ferraggine, Jorge Horacio Doorn and Laura C. Rivero, pp. 221-239, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.15
An Overview of Ontology-Driven Data Integration¹

Agustina Buccella
Universidad Nacional del Comahue, Argentina

Alejandra Cechich
Universidad Nacional del Comahue, Argentina

DOI: 10.4018/978-1-60566-242-8.ch051
INTRODUCTION

New software requirements have emerged because of innovation in technology, especially involving network aspects. The possibility that enterprises, institutions, and even common users can improve their connectivity, allowing them to work as if they were at the same place at the same time, has generated an explosion in this area. Besides, nowadays it is very common to hear that large enterprises merge with others. Therefore, requirements such as interoperability and integrability are part of any type of organization around the world.

In general, large modern enterprises use different database management systems to store and search their critical data. All of these databases are very important for an enterprise, but the different interfaces they may have make their administration difficult. Therefore,
recovering information through a common interface becomes crucial in order to realize, for instance, the full value of the data contained in the databases (Hass & Lin, 2002). Thus, in the '90s the term Federated Database emerged to characterize techniques for providing integrated access to a set of distributed, heterogeneous, and autonomous databases (Busse, Kutsche, Leser & Weber, 1999; Litwin, Mark & Roussoupoulos, 1990; Sheth & Larson, 1990). Here is where the concept of Data Integration appears. This concept refers to the process of unifying data that share some common semantics but originate from unrelated sources.

Several aspects must be taken into account when working with Federated Systems because the main characteristics of these systems make the integration tasks more difficult. For example, the autonomy of the information sources, their geographical distribution, and the heterogeneity among them are some of the main problems we must face to perform the integration.
Autonomy means that users and applications can access data through the federated system or through their own local systems. Distribution (Ozsu & Valduriez, 1999) refers to data (or computers) spread among multiple sources, stored in a single computer system or in multiple computer systems. These computer systems may be geographically distributed but interconnected by a communication network. Finally, heterogeneity relates to the different meanings that may be inferred from data stored in databases. In (Cui & O'Brien, 2000), heterogeneity is classified into four categories: structural, syntactical, system, and semantic. Structural heterogeneity deals with inconsistencies produced by different data models, whereas syntactical heterogeneity deals with the consequences of using different languages and data representations. System heterogeneity deals with having different supporting hardware and operating systems. Finally, semantic heterogeneity (Cui & O'Brien, 2000) is one of the most complex problems faced by data integration tasks. Each information source included in the integration has its own interpretation of, and assumptions about, the concepts involved in the domain. Therefore, it is very difficult to determine when two concepts belonging to different sources are related. Some of the relations among concepts that semantic heterogeneity involves are: synonymy, when the sources use different terms to refer to the same concept; homonymy, when the sources use the same term to denote completely different concepts; hyponymy, when one source contains a term less general than a term in another source; and hypernymy, when one source contains a term more general than a term in another source; etc.
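These relations can be made concrete with a small sketch; the terms and relation labels below are invented for illustration:

    # Relations between a term in source 1 and a term in source 2.
    relations = [
        ("client", "customer", "synonym"),   # different terms, same concept
        ("order", "order", "homonym"),       # same term, different concepts
        ("sedan", "vehicle", "hyponym"),     # source-1 term is less general
        ("vehicle", "sedan", "hypernym"),    # source-1 term is more general
    ]

    def related_as(kind):
        return [(t1, t2) for t1, t2, k in relations if k == kind]

    print(related_as("synonym"))  # [('client', 'customer')]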
In this paper we will focus on the use of ontologies because of their advantages when used for data integration. For example, an ontology may provide a rich, predefined vocabulary that serves as a stable conceptual interface to the databases and is independent of the database schemas; the knowledge represented by the ontology may be sufficiently comprehensive to support translation of all relevant information sources; and an ontology may support consistency management and recognition of inconsistent data. The next section will analyze several systems that use ontologies as a tool to solve data integration problems.
BACKGROUND

Recently, the term Federated Databases has evolved to Federated Information Systems because of the diversity of new information sources involved in the federation, such as HTML pages, databases, files, etc., either static or dynamic. A useful classification of information systems based on the dimensions of distribution and heterogeneity can be found in (Busse et al., 1999). Besides, this work defines the classical architecture of federated systems (based on Sheth & Larson (1990)), which is widely referenced by many researchers. Figure 1 shows this architecture.

In the figure, the wrapper layer involves a number of modules belonging to a specific data organization. These modules know how to retrieve data from the underlying sources, hiding their data organizations. As the federated system is autonomous, local users may access local databases through their local applications independently from users of other systems. Otherwise, to access the federated system, they need to use the user interface layer. The federated layer is one of the main components currently under analysis and study. Its importance comes from its responsibility to solve the problems related to semantic heterogeneity, as we previously introduced. So far, different approaches have been used to model this layer. In general, they use ontologies as tools to solve the semantic problems among different sources (Stephens, Gangam & Huhns, 2004; Le, Dieng-Kuntz & Gandon, 2004; Giunchiglia, Yatskevich & Giunchiglia, 2005; Buccella, Cechich & Brisaboa, 2003; Buccella, Cechich & Brisaboa, 2004).
Figure 1. Architecture of federated systems
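A rough sketch of the wrapper idea follows, under our assumption that each wrapper exposes a single retrieval operation; the class names and data shapes are illustrative only:

    # Hypothetical wrapper layer: each wrapper hides the data organization
    # of one source behind a common retrieval interface.
    class Wrapper:
        def retrieve(self, concept):
            raise NotImplementedError

    class RelationalWrapper(Wrapper):
        def __init__(self, tables):
            self.tables = tables            # e.g., {"customer": [...]}
        def retrieve(self, concept):
            return self.tables.get(concept, [])

    class HtmlPageWrapper(Wrapper):
        def __init__(self, extracted):
            self.extracted = extracted      # records scraped from pages
        def retrieve(self, concept):
            return [r for r in self.extracted if r.get("type") == concept]

    # The federated layer can then query every source uniformly.
    def federated_query(wrappers, concept):
        results = []
        for w in wrappers:
            results.extend(w.retrieve(concept))
        return results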
ONTOLOGY-BASED DATA INTEGRATION

The term ontology was introduced by Gruber (1993) as an "explicit specification of a conceptualization." A conceptualization, in this definition, refers to an abstract model of how people commonly think about a real thing in the world; and explicit specification means that the concepts and relationships of the abstract model receive explicit names and definitions. An ontology gives the names and descriptions of the domain-specific entities by using predicates that represent relationships between these entities. The ontology provides a vocabulary to represent and communicate domain knowledge, along with a set of relationships containing the vocabulary's terms at a conceptual level. Therefore, because of their potential to describe the semantics of information sources and to solve the heterogeneity problems, ontologies are being used for data integration tasks.
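In that spirit, here is a toy example (ours; the vocabulary is invented) of the two ingredients of Gruber's definition, explicitly named concepts and explicitly named relationships among them:

    concepts = {"Enterprise", "Database", "Employee"}
    relationships = [
        ("Enterprise", "owns", "Database"),
        ("Enterprise", "employs", "Employee"),
        ("Employee", "queries", "Database"),
    ]

    def related_to(concept):
        """Concepts reachable from `concept` through one relationship."""
        return {(rel, dst) for src, rel, dst in relationships if src == concept}

    print(sorted(related_to("Enterprise")))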
Recent surveys have emerged describing and analyzing proposals for ontology matching (Euzenat & Shvaiko, 2007; Shvaiko & Euzenat, 2005; Rahm & Bernstein, 2001). But some of these surveys only analyze methodologies and tools to build an integrated system, without analyzing how the whole system works. On the other hand, surveys of ontology-based systems for data integration can be found in the literature (Euzenat & Shvaiko, 2007; Kalfoglou & Schorlemmer, 2003; Wache et al., 2001). For example, the work presented by Euzenat & Shvaiko (2007) describes and analyzes a wide set of ontology matching proposals. Although it is focused on methodologies to improve matchings, new approaches to complete integrated systems are also taken into account. Besides, more information with links to several approaches can be found at OntologyMatching.org². Another example is (Wache et al., 2001), in which the authors focus on some aspects of the use of ontologies: the language representation, mappings, and tools. This work also classifies the use of ontologies into three approaches: the single ontology approach, the multiple ontology approach, and the hybrid ontology approach, as Figure 2 shows. We use this classification to categorize the use of ontologies by each proposal.

At the same time, with respect to the integration process and based on the approaches aforementioned, two new approaches (Calvanese & Giacomo, 2005; Calvanese, Giacomo & Lenzerini, 2001) have emerged for specifying mappings in an integrated system. One of them, the global-as-view (GAV) approach, is based on the definition of global concepts as views over the sources; that is, each concept of the global view is mapped to a query over the sources.
Figure 2. Classification of the use of ontologies
The other approach, local-as-view (LAV), is based on the definition of the sources as views over the global view. Thus, the information the sources contain is described in terms of this global view.

In this section we will focus on how ontology-based systems address the semantic heterogeneity problems. We have investigated many systems, which follow some of the approaches of Figure 2, considering relevant aspects with respect to the use of ontologies: reusability, changeability, and scalability. These aspects allow us to analyze whether quality properties are improved by a particular use of ontologies in a given system. The use of ontologies refers to how ontologies help solve data integration problems. Commonly, the systems show how ontologies are used to solve the integration problems and how ontologies interact with other architectural components. In this respect, the systems describe their ontological components and the ways they solve the different semantic heterogeneity problems. We then characterize this feature by means of three quality characteristics: reusability, changeability, and scalability. Reusability refers to the ability to reuse the ontologies; that is, ontologies defined to solve other problems can be used in a system because the system either supports different ontological languages and/or defines local ontologies.
Changeability refers to the ability to change some structures within an information source without producing substantial changes in the system components. Finally, scalability refers to the possibility of easily adding new information sources to the integrated system.

In order to give a wide view of ontology-based systems, we divide the approaches into two branches. On one branch, we briefly analyze systems proposed during the '90s and no longer under research. Table 1 shows our classification, taking into account some of these systems. The columns of the table contain the three relevant aspects of the use of ontologies mentioned above. A more extensive analysis of these systems can be found in (Buccella, Cechich & Brisaboa, 2005a; Buccella, Cechich & Brisaboa, 2005b).

As we can see, SIMS (Ambite et al., 1997) and Carnot (Woelk, Cannata, Huhns, Shen & Tomlinson, 1993) are scored as "Not supported" in the three columns. In the first column this value means that the ontologies are not reusable because both systems define a global ontology, following the single ontology approach, in order to integrate data. The same happens with the second column, changeability, because no support is given by these systems to bear changes in the information sources. When one source changes, the global ontology must be rebuilt.
Table 1. Ontology-based systems during the '90s decade

Approach | Ontology-based Systems | Reusability | Changeability | Scalability
Single Ontology Approach | SIMS (Ambite et al., 1997); Carnot (Woelk, Cannata, Huhns, Shen & Tomlinson, 1993) | Not supported | Not supported | Not supported
Multiple Ontology Approach | OBSERVER (Mena, Kashyap, Sheth & Illarramendi, 2000) | Fully-supported | Semi-supported | Semi-supported
Hybrid Ontology Approach | KRAFT (Preece, Hui, Gray, Jones & Cui, 1999) | Supported | Supported | Supported
Hybrid Ontology Approach | COIN (Firat, Madnick & Grosof, 2002; Goh et al., 1999) | Supported | Supported | Supported
Hybrid Ontology Approach | InfoSleuth (Bayardo et al., 1997) | Supported | Supported | Supported
Finally, scalability is not supported because, again, adding a new information source forces the global ontology to be rebuilt. The problem with the global ontology approach is that we must manage a global integrated ontology, which involves administration, maintenance, consistency, and efficiency problems that are very hard to solve. On the other hand, the OBSERVER system (Mena, Kashyap, Sheth & Illarramendi, 2000) uses the multiple ontology approach to alleviate the problems presented by the single approach. As each ontology can be created independently, reusability is fully supported. However, changeability and scalability are semi-supported because changes in local ontologies also generate changes in the mappings. Finally, the last three systems, which follow the hybrid ontology approach, are scored better. Although they propose different and novel ways to improve these characteristics, all of these mechanisms yield good results. For example, in InfoSleuth (Bayardo et al., 1997), a resource added to the system only needs an interface to advertise its services and let other agents make use of them immediately.

On the other branch, we analyze more recent systems widely cited by researchers, although some of them are still under development. We use the same characteristics in order to analyze the evolution of the systems.
For example, we notice that most of the new systems do not follow the single ontology approach; its inherent disadvantages make such systems difficult to deal with. However, some systems, such as MediWeb (Arruda, Lima & Baptista, 2002) and DIA (Medcraft, Schiel & Baptista, 2003), apply this approach, but without better results. Only in the case of DIA can changeability and scalability be improved, because the global ontology does not need to be modified when a source is added; only an ontology-schema matching table is necessary to add a new database to the integrated system.

As a new proposal within the multiple ontology approach we can cite OntoGrate (Dou & LePendu, 2006). Here, first-order logic inference is used to find mappings among the underlying ontologies. Through a set of rules and user assistance, the databases (which are the sources of the system) are translated into source ontologies. Then, these ontologies are merged by applying bridging axioms and a reasoner. In this system, reusability is not supported because the ontologies are created from the underlying databases using a specific language and its particularities. For changeability or scalability, the reasoning process must be re-executed over the information changed or added; the speed of this process will depend on how large the ontologies are.

Recently, new approaches have emerged that exploit the advantages the hybrid ontology approach provides.
Some novel approaches are MOBS (McCann, Doan, Varadarajan & Kramnik, 2003), AutoMed (Zamboulis & Poulovassilis, 2006), and Alasoud et al. (Alasoud, Haarslev & Shiri, 2005). In MOBS (Mass Collaboration to Build Systems) something different is proposed. Instead of creating a global ontology based on underlying domain ontologies, all of the ontologies are created at the same time. That is, in this first step the global ontology contains the general concepts users want to retrieve rather than information extracted from underlying sources. Then, initial mappings are defined by using a random assignment or a schema matching tool. Finally, the mass collaboration of this approach lies in the user feedback needed to readjust the system and to define the correct mappings. In this way, the ontologies are reusable because they can be used by this system or by another one. But changeability and scalability can be tedious because the user collaboration process must be re-run each time something is modified or added.

In the AutoMed and Alasoud et al. approaches, materialized views can be defined. In the case of Alasoud et al., an integrated view (as a global ontology) is partially materialized to improve query answering time and to take advantage of both the fully materialized and fully virtual approaches. In this way, the answers to queries are retrieved from materialized as well as virtual views. Besides, the mappings between concepts in the source ontologies are described in terms of the integrated view. Thus, adding a new ontology to this system does not require changes in the global view.

Considering the approach used to specify mappings, the GAV approach is generally chosen by the proposals. Only the Alasoud et al. and OntoGrate systems define something similar to the LAV approach. The advantages and disadvantages of each approach have to be considered when an integration process is initiated (Calí et al., 2002). For example, in the LAV approach restrictions over the sources can be defined easily; on the contrary, in the GAV approach it is easier to define restrictions over the global view.
With respect to scalability, following the LAV approach, adding a new source to the integrated system does not require changes in the global view. On the other hand, in the GAV approach, when a new source is added, changes to the global view are required; however, scaling the global view itself is easier in this last approach. Finally, considering query processing, LAV requires reasoning mechanisms in order to answer queries; in GAV, on the contrary, conventional mechanisms can be implemented. In order to take advantage of the benefits of both approaches, a new approach, called global-local-as-view (GLAV), has been proposed by (Friedman, Levy & Millstein, 1999). Although query processing is out of the scope of this work, we consider that a study of the query capabilities of each proposal should be performed. For example, in (Calvanese, Giacomo & Lenzerini, 2001) two case studies are presented in order to show the need for suitable techniques for query answering.
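The asymmetry between the two mapping styles can be sketched as follows; the sources, schemas, and view descriptions are hypothetical. Note that the GAV mapping is directly executable, while the LAV descriptions only state what each source contains and would need a reasoner to answer queries:

    # Two sources with their own schemas.
    source1 = [("ana", "accounting")]        # employee(name, dept)
    source2 = [("ben", "sales", 2004)]       # worker(name, dept, since)

    # GAV: the global concept employee(name, dept) is a query over sources.
    def global_employee_gav():
        return ([(n, d) for n, d in source1]
                + [(n, d) for n, d, _ in source2])

    # LAV: each source is described as a view over the global concept;
    # answering a global query requires reasoning over these descriptions.
    lav_descriptions = {
        "source1": "selection of global employee(name, dept)",
        "source2": "selection of global employee(name, dept), plus hire year",
    }

    print(global_employee_gav())  # [('ana', 'accounting'), ('ben', 'sales')]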
FUTURE TRENDS

Several of the systems compared here are still in a development stage and, as we have explained, some problems remain to be solved in order to reach a good integration. Recent proposals, which are based on the older ones, try to improve several aspects in order to offer an integral solution. For example, new systems do not apply the global ontology approach as SIMS did, due to problems such as scalability and changeability. Other proposals, applying the other approaches (multiple and hybrid), intend to solve some shortcomings of the global approach, but additional effort is needed. For example, reusability deserves more attention because several ontologies created for the semantic web might (some day) be part of an integrated system. In this light, relying on a particular ontology with specific requirements, as in OntoGrate, is not such a good idea.
There are several proposals (Euzenat & Shvaiko, 2007), not analyzed here, that only implement methods for semantic matching. These proposals are complementary because they provide mechanisms to create the global view or the mappings in an efficient way. Euzenat & Shvaiko (2007) present a survey that is mainly focused on ontology matching proposals. The task of ontology matching involves the process of finding relationships or correspondences between entities of different ontologies. In this paper these proposals are not considered because we do not focus on this type of problem but on the functionality of the whole integrated system. Nevertheless, semantic matching is a crucial area that must be analyzed to assess the processes the system involves.

Currently, the task of building an integrated system is not easy. In trying to improve some essential aspects, other equally important ones can be forgotten. For example, in order to improve methods for finding mappings, more expressive ontologies are created, but aspects such as understandability and complexity are not taken into account. New systems must consider all of these aspects in order to give an integral solution.
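To indicate what the simplest such matching methods look like, here is a deliberately naive, name-based matcher (our sketch; real systems combine many more techniques). It proposes correspondences between entities of two ontologies by string similarity:

    from difflib import SequenceMatcher

    def propose_matches(entities1, entities2, threshold=0.7):
        """Propose (entity1, entity2, score) pairs above a similarity threshold."""
        pairs = []
        for e1 in entities1:
            for e2 in entities2:
                score = SequenceMatcher(None, e1.lower(), e2.lower()).ratio()
                if score >= threshold:
                    pairs.append((e1, e2, round(score, 2)))
        return pairs

    print(propose_matches(["Client", "Invoice"], ["customer", "client_id", "invoice"]))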
CONCLUSION

Today, semantic heterogeneity implies many complex problems that are addressed by using different approaches, including ontologies, which give a higher degree of semantics to the treatment of the data involved in the integration. We have analyzed several systems according to their use of ontologies, and we have evaluated three important related aspects: reusability, changeability, and scalability. Each system has implemented its own solution, with advantages and disadvantages, but some common elements and some original aspects can be found.

There are other important aspects we could have considered for characterizing current ontology-based systems.
Among them, we should mention the use of automated tools for supporting the manipulation of ontologies. Of course, further characterization is needed to completely understand the use of ontologies for data integration. We hope our work will motivate the reader to immerse more deeply in this interesting world.
REFERENCES

Alasoud, A., Haarslev, V., & Shiri, N. (2005). A hybrid approach for ontology integration. Proceedings of the 31st VLDB Conference, Trondheim, Norway.

Ambite, J. L., Arens, Y., Ashish, N., Knoblock, C. A., & collaborators (1997). The SIMS manual 2.0. Technical report, University of Southern California, December 22. Retrieved August 01, 2007, from http://www.isi.edu/sims/papers/simsmanual.ps

Arruda, L., Baptista, C., & Lima, C. (2002). MEDIWEB: A mediator-based environment for data integration on the Web. Databases and Information Systems Integration, ICEIS, 34-41.

Baru, C. (1998). Features and requirements for an XML view definition language: Lessons from XML information mediation. In W3C Workshop on Query Language (QL'98), Boston.

Bayardo, R. J., Jr., Bohrer, W., Brice, R., Cichocki, A., Fowler, J., Helal, A., et al. (1997). InfoSleuth: Agent-based semantic integration of information in open and dynamic environments. Proceedings of the ACM SIGMOD International Conference on Management of Data, 195-206.

Buccella, A., Cechich, A., & Brisaboa, N. R. (2003). An ontology approach to data integration. Journal of Computer Science and Technology, 3(2), 62–68.
Buccella, A., Cechich, A., & Brisaboa, N. R. (2004). A federated layer to integrate heterogeneous knowledge. In VODCA'04, First International Workshop on Views on Designing Complex Architectures, Bertinoro, Italy. Electronic Notes in Theoretical Computer Science, Elsevier Science B.V., 101-118.

Buccella, A., Cechich, A., & Brisaboa, N. R. (2005a). Ontology-based data integration: Different approaches and common features. In L. Rivero, J. Doorn, & V. Ferraggine (Eds.), Encyclopedia of Database Technologies and Applications. Idea Group.

Buccella, A., Cechich, A., & Brisaboa, N. R. (2005b). Ontology-based data integration methods: A framework for comparison. Revista Colombiana de Computación, 6(1).

Busse, S., Kutsche, R. D., Leser, U., & Weber, H. (1999). Federated information systems: Concepts, terminology and architectures. Technical Report Nr. 99-9, TU Berlin.

Calí, A., Calvanese, D., Giacomo, G. D., & Lenzerini, M. (2002). On the expressive power of data integration systems. In Proceedings of the 21st Int. Conf. on Conceptual Modeling (ER), volume 2503 of Lecture Notes in Computer Science. Springer.

Calvanese, D., & Giacomo, G. D. (2005). Data integration: A logic-based perspective. AI Magazine, 26(1), 59–70.

Calvanese, D., Giacomo, G. D., & Lenzerini, M. (2001). A framework for ontology integration. In SWWS, 303-316.

Cui, Z., & O'Brien, P. (2000). Domain ontology management environment. In Proceedings of the 33rd Hawaii International Conference on System Sciences.

Dou, D., & LePendu, P. (2006). Ontology-based integration for relational databases. SAC '06, 461–466.
Euzenat, J., & Shvaiko, P. (2007). Ontology matching. Heidelberg: Springer-Verlag.

Firat, A., Madnick, S., & Grosof, B. (2002). Knowledge integration to overcome ontological heterogeneity: Challenges from financial information systems. Twenty-Third International Conference on Information Systems, ICIS.

Friedman, M., Levy, A., & Millstein, T. (1999). Navigational plans for data integration. AAAI/IAAI, 67-73.

Giunchiglia, F., Yatskevich, M., & Giunchiglia, E. (2005). Efficient semantic matching. In A. Gomez-Perez & J. Euzenat (Eds.), ESWC 2005, volume LNCS 3532, 272-289. Springer-Verlag.

Goh, C. H. (1996). Representing and reasoning about semantic conflicts in heterogeneous information sources. PhD thesis, Massachusetts Institute of Technology, Sloan School of Management.

Goh, C. H., Bressan, S., Siegel, M., & Madnick, S. E. (1999). Context interchange: New features and formalisms for the intelligent integration of information. ACM Transactions on Information Systems, 17(3), 270–293. doi:10.1145/314516.314520

Gruber, T. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2), 199–220. doi:10.1006/knac.1993.1008

Hass, L., & Lin, E. (2002). IBM federated database technology. Retrieved August 01, 2005, from http://www-106.ibm.com/developerworks/db2/library/techarticle/0203haas/0203haas.html

Kalfoglou, Y., & Schorlemmer, M. (2003). Ontology mapping: The state of the art. The Knowledge Engineering Review, 18(1), 1–31. doi:10.1017/S0269888903000651
Le, B. T., Dieng-Kuntz, R., & Gandon, F. (2004). On ontology matching problems for building a corporate Semantic Web in a multi-communities organization. In ICEIS 2004, Software Agents and Internet Computing, 236-243.

Litwin, W., Mark, L., & Roussoupoulos, N. (1990). Interoperability of multiple autonomous databases. ACM Computing Surveys, 22(3), 267–293. doi:10.1145/96602.96608

McCann, R., Doan, A., Varadarajan, V., & Kramnik, A. (2003). Building data integration systems via mass collaboration. International Workshop on the Web and Databases (WebDB), California.

Medcraft, P., Schiel, U., & Baptista, P. (2003). DIA: Data integration using agents. Databases and Information Systems Integration, ICEIS, 79-86.

Mena, E., Kashyap, V., Sheth, A., & Illarramendi, A. (2000). OBSERVER: An approach for query processing in global information systems based on interoperation across pre-existing ontologies. Boston: Kluwer Academic Publishers, 1-49.

Ozsu, M. T., & Valduriez, P. (1999). Principles of distributed database systems (2nd ed.). Prentice Hall.

Preece, A., Hui, K., Gray, A., Jones, D., & Cui, Z. (1999). The KRAFT architecture for knowledge fusion and transformation. In 19th SGES International Conference on Knowledge-based Systems and Applied Artificial Intelligence (ES'99). Berlin: Springer.

Rahm, E., & Bernstein, A. (2001). A survey of approaches to automatic schema matching. The VLDB Journal, 10(4), 334–350. doi:10.1007/s007780100057

Sheth, A. P., & Larson, J. A. (1990). Federated database systems for managing distributed, heterogeneous and autonomous databases. ACM Computing Surveys, 22(3), 183–236. doi:10.1145/96602.96604
Shvaiko, P., & Euzenat, J. (2005). A survey of schema-based matching approaches. Journal of Data Semantics, IV, 146–171. doi:10.1007/11603412_5

Stephens, L., Gangam, A., & Huhns, M. (2004). Constructing consensus ontologies for the Semantic Web: A conceptual approach. World Wide Web: Internet and Web Information Systems, 7, 421–442. Kluwer Academic Publishers.

Wache, H., Vögele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H., & Hübner, S. (2001). Ontology-based integration of information - A survey of existing approaches. In Proceedings of the IJCAI-01 Workshop: Ontologies and Information Sharing, Seattle, WA, 108-117.

Woelk, D., Cannata, P., Huhns, M., Shen, W., & Tomlinson, C. (1993). Using Carnot for enterprise information integration. Second International Conf. on Parallel and Distributed Information Systems, 133-136.

Zamboulis, L., & Poulovassilis, A. (2006). Information sharing for the Semantic Web - A schema transformation approach. In Proceedings of DISWeb06, CAiSE06 Workshop Proceedings, 275-289.
KEY TERMS AND DEFINITIONS

Data Integration: The process of unifying data that share some common semantics but originate from unrelated sources.

Distributed Information System: A set of information systems physically distributed over multiple sites, which are connected by some kind of communication network.

Federated Database: The same as an FIS, except that the component information systems are all databases (i.e., structured sources).

Federated Information System (FIS): A set of autonomous, distributed and heterogeneous information systems that are operated together to generate useful answers for users.
Heterogeneous Information System: A set of information systems that differ in syntactic or logical aspects such as hardware platforms, data models or semantics.

Ontological Reusability: The ability to create ontologies that can be used in different contexts or systems.

Ontological Changeability: The ability to change some structures of an information source without producing substantial changes in the ontological components of the integrated system.

Ontological Scalability: The ability to easily add new information sources without generating substantial changes in the ontological components of the integrated system.

Ontology: Provides a vocabulary to represent and communicate knowledge about a domain, together with a set of relationships among the terms of the vocabulary at a conceptual level.
Ontology Matching: The process of finding relationships or correspondences between entities of different ontologies.

Semantic Heterogeneity: Each information source has a specific vocabulary according to its understanding of the world. The differing interpretations of the terms within each of these vocabularies cause semantic heterogeneity.
ENDNOTES

1. This work is partially supported by the UNCOMA project 04/E059.
2. http://www.ontologymatching.org
This work was previously published in Handbook of Research on Innovations in Database Technologies and Applications: Current and Future Trends, edited by Viviana E. Ferraggine, Jorge Horacio Doorn and Laura C. Rivero, pp. 471-480, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 1.16
A Fundamental SOA Approach to Rebuilding Enterprise Architecture for a Local Government after a Disaster Zachary B. Wheeler SDDM Technology, USA
ABSTRACT

As a result of Hurricane Katrina, the destruction of property, assets, documentation, and human life in the Gulf Port has introduced a myriad of challenging issues. These issues involve human, social, government, and technological concerns. This chapter does not address the many immediate human and social concerns brought forth from a natural disaster or major terrorist attack (NDMTA); this chapter addresses a small but significant problem of re-establishing or laying the groundwork for an enterprise architecture for local government during the response phase of the disaster. Specifically, it addresses constructing a high-level data model and fundamental SOA, utilizing the remaining local assets, XML (extensible markup language), and Web services.

DOI: 10.4018/978-1-60566-330-2.ch015
INTRODUCTION

Disaster preparedness, response, and recovery received a great deal of attention immediately after the terrorist attacks of 9/11 and eventually faded from the forefront after the invasion of Iraq and the global war on terrorism. However, recent natural disasters such as the Indonesian Tsunami in 2004 and the devastating Hurricane Katrina in Louisiana have refocused attention on these three prominent areas. Specifically, the lack of preparedness, inadequate response, and slow recovery have burdened local, state, and federal governments as well as citizens.
The presented enterprise approach and implementation process fills a gap in the disaster preparedness and response phases; however, it is applicable in each phase: preparedness, response, and recovery. It is recommended that the presented approach be included as part of the disaster preparedness phase, implemented in the response phase, and eventually expanded in the recovery phase. The approach is unique because the enterprise implementation takes place during the actual response phase of the disaster, and utilization of the fundamental SOA leads to further expansion during and after the recovery phase. The approach introduced in this chapter takes advantage of the Zachman framework system model perspective by utilizing Web services on a local level and introducing a practical but efficient method for populating the initial data model. A series of basic assumptions are introduced based on information regarding the recent Gulf Port, Hurricane Andrew, Indonesian Tsunami, and 9/11 disaster events. These assumptions are based on the physical, environmental, and technological conditions immediately after disaster strikes. The assumptions are that there will be limited or nonexistent landline and wireless communication, limited ability to use generators as a power source, limited or nonexistent Internet and intranet service, major IT system destruction, and the incapacitation of local government services. This chapter addresses the problem of reestablishing or laying the groundwork for an enterprise architecture for local government during the response phase of the disaster. Specifically, it addresses constructing a high-level data model and fundamental SOA by utilizing the remaining local assets, XML, and Web services.
BACKGROUND

The fundamental role of local government is to protect the people, provide basic human services, and assist in strengthening communities. This is
typically accomplished by establishing various local agencies and departments. These departments are structured to provide essential services for the community. For instance, the fire department's role is to help citizens in immediate danger due to fire, gas, or chemical hazards. The role of the health department is to establish policy, programs, and standards regarding health and health-related issues. An additional role of the health department is to assist citizens in obtaining basic health care services. Each established department or agency has a role in assisting the community and its residents by providing relevant services. In a typical municipality, each agency has a database of information relating to the citizens and the services provided to the citizen by the agency. For instance, the police department maintains a database of criminals, criminal activity, and citizen complaints. The Department of Human Services maintains a database of child immunization records. In short, each agency maintains a database and application system to enter data, process data, and execute business rules. However, in the wake of an NDMTA, these systems along with other IT assets are destroyed or rendered useless. For instance, Hurricane Katrina destroyed most of New Orleans, including property, buildings, human life, landline and mobile communications, Internet services, and intranet services, and essentially incapacitated local government. In the terror attacks of 9/11, the same asset destruction was prevalent within a specified geographic area. Hurricane Andrew wreaked havoc among Florida communities and followed the same line of asset destruction and local government incapacitation as Hurricane Katrina. In each of these cases, major response and rebuilding efforts were needed to help reestablish public safety, government, and services for the remaining citizens. This approach suggests that reestablishing a basic framework for IT services can be facilitated during the response phase of a disaster. In that regard, the proposed approach is unique in that the role of rebuilding typically takes place during the recovery phase (University of Florida, 1998).
The extended Zachman framework system model perspective will be utilized to establish high-level data elements for the model. Web services will be used to lay down a basic framework for a fundamental service-oriented architecture that can be extended to an enterprise level once essential government services have been restored. In addition, a data collection process is provided for the initial population of the primary data elements from the remaining survivors.
The System Model and Zachman

In the initial framework provided by Zachman (1987), he identifies five different perspectives of an enterprise architecture and three views of the enterprise, and introduces the six questions pertaining to an enterprise: what, how, where, who, when, and why (Table 1). Zachman provides a clear and concise identification of the various views of an enterprise and shows how each view is proper and correct. In 1992, the Zachman framework was extended by Zachman and Sowa (1992). In addition to answering the final three questions, they introduced the conceptual graph to represent the ISA and replaced the "model of the information system" with the more generic system model reference for row 3, the designer perspective. Hence, the perspectives identified by Zachman are scope, enterprise model, system model, technology model, and components. Our perspective will cover the system model or designer perspective.

Table 1. Zachman's six enterprise questions

What?    What entities are involved?
How?     How are they processed?
Where?   Where are they located?
Who?     Who works with the system?
When?    When do events occur?
Why?     Why are these activities taking place?
In the conclusion, the what, how, and where questions of the ISA will be answered.
MAIN THRUST OF THE CHAPTER

Basis for a Conceptual Data Model

The ISA system model perspective represents the system analyst role in information technology. The system analyst is responsible for determining the data elements and functions that represent the business entities and processes. Zachman suggests introducing all of the entities; however, constructing all data elements, processes, and functions for a local government would be beyond the scope of this chapter. Therefore, a high-level perspective covering the core data elements utilized during the response phase is presented.
Primary Data Elements

One of the main priorities of local government is to provide services to the citizens of the community. Regardless of the service provided, most government agencies interact with residents and maintain some form of database of citizen information. In a disaster area, the citizens of the community are the disaster survivors. From a data acquisition perspective, we can obtain valuable information from the survivors and, with this information, begin to develop a conceptual data model for the emerging enterprise. Typically, the conceptual data model does not show the actual data details of the entities. Instead, it provides a high-level entity view using the entity relationship diagram (ERD) (Rob & Coronel, 2002). Entity details will be provided in tabular format for clarity; however, the ERD will only show the entities and the defined relationships. Utilizing the following assumptions, we can define each remaining citizen as unique (see Tables 2, 3, 4, and 5):
• Every entity has a name (names are not unique).
• Every entity has or had an associated address (addresses are not unique).
• Every entity has a sex (unique).
• Every entity will have an identifying unique number (ID card, green card, federal employee identification number, social security card, or driver's license).

Table 2. Person entity details

PERSON
Unique_Id                          Not Null
Unique_ID_Type                     Not Null
First_Name                         Not Null
Middle_Name
Last_Name                          Not Null
Name_Suffix (i.e., Jr, Sr, etc.)
Date_of_Birth                      Not Null
Sex                                Not Null
Status (Living, Deceased)          Not Null
Phone (Optional)
Address_Id

Table 3. Address entity details

ADDRESS
Address_Id                Not Null
Street_Number             Not Null
Prefix
Street_Name               Not Null
Street_Type               Not Null
PostDir
City                      Not Null
State                     Not Null
Zip
Current_Address (Y, N)    Not Null

Table 5. Police entity object relating to person

POLICE
Unique_Id         Not Null
Unique_Id_Type    Not Null
Arrested (Y, N)   Not Null
Officer_Id        Not Null
Comments
Crime_Id
Table 4. Essential local government agencies

Essential Department/Agencies   Role
Police Department               For maintaining order and protecting the people from physical harm
Department of Health            For maintaining control and administering basic health services, including disease and disease outbreak
Emergency Medical Services      For assisting in medical data collection and medical services
Department of Public Works      For cleaning and clearing of debris, corpses and other related health hazards
Fire Department                 For maintaining fire, gas, and chemical controls and basic rescue operations
Note: Newborns or infants will be given a temporary generic unique ID if they do not have a SSN. If we further assume that each remaining survivor (citizen) has an associated address, then we can define the address entity above. We are working under the assumption that local assets and asset information have been destroyed, which includes the destruction of roads, streets, bridges, highways, and previously existing addresses. Thus, when the data collection process begins during the response phase, local officials or management can glean a geographic representation of survivors and establish a basic address information repository. During the recovery phase, old and new street, road, bridge, highway, and address information will be added to the system, thus creating a complete address reference or even an address database. For instance, an entity that contains parcel information (square, suffix, and lot) and an entity that contains ownership information (owner name, owner address, etc.) will be needed; however, during the response
phase, only the address entity defined above is necessary. The person and address entities are termed primary data elements.
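To make the primary data elements concrete, the following minimal sketch (ours, not part of the chapter's original implementation) realizes the PERSON and ADDRESS entities of Tables 2 and 3 as relational tables. SQLite, the surrogate-key style, and the composite person key are illustrative assumptions; column names and nullability follow the tables above.

import sqlite3

# Minimal sketch of the primary data elements (Tables 2 and 3).
ddl = """
CREATE TABLE Address (
    Address_Id      INTEGER PRIMARY KEY,
    Street_Number   TEXT NOT NULL,
    Prefix          TEXT,
    Street_Name     TEXT NOT NULL,
    Street_Type     TEXT NOT NULL,
    PostDir         TEXT,
    City            TEXT NOT NULL,
    State           TEXT NOT NULL,
    Zip             TEXT,
    Current_Address TEXT NOT NULL CHECK (Current_Address IN ('Y', 'N'))
);
CREATE TABLE Person (
    Unique_Id      TEXT NOT NULL,
    Unique_Id_Type TEXT NOT NULL,
    First_Name     TEXT NOT NULL,
    Middle_Name    TEXT,
    Last_Name      TEXT NOT NULL,
    Name_Suffix    TEXT,
    Date_of_Birth  TEXT NOT NULL,
    Sex            TEXT NOT NULL,
    Status         TEXT NOT NULL CHECK (Status IN ('Living', 'Deceased')),
    Phone          TEXT,
    Address_Id     INTEGER REFERENCES Address(Address_Id),
    PRIMARY KEY (Unique_Id, Unique_Id_Type)   -- assumed composite key
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)

# Register one survivor with the address observed during data collection
# (all values are hypothetical).
conn.execute("INSERT INTO Address (Street_Number, Street_Name, Street_Type, "
             "City, State, Current_Address) VALUES (?, ?, ?, ?, ?, ?)",
             ("1200", "Canal", "St", "New Orleans", "LA", "Y"))
addr_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
conn.execute("INSERT INTO Person (Unique_Id, Unique_Id_Type, First_Name, "
             "Last_Name, Date_of_Birth, Sex, Status, Address_Id) "
             "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
             ("123-45-6789", "SSN", "Jane", "Doe", "1970-01-01", "F",
              "Living", addr_id))
conn.commit()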
Secondary Data Elements: Extending the Core Elements

In the event of an NDMTA, there must be a continuation of basic essential services for the remaining citizens. The essential services required during a disaster, according to Davis (1998), are public safety, public works, and health services. This position is further bolstered by our knowledge of the recent events in the Gulf Port and the establishment of the following five essential services for that particular region (Table 6, Table 7, Table 8, and Table 9). Based on the five essential departments, five basic data elements can be identified. These essential data elements are termed secondary data elements. Although there are more possible data elements than presented here, an expanded view of potential secondary elements is provided for clarity. Now that the primary and secondary elements have been identified, we have enough high-level data elements to begin the construction of our conceptual data model. In the overall enterprise, entities can be identified and added as service agencies are added or new requirements are determined.

Table 6. Health services entity object relating to person

HEALTH
Unique_Id                   Not Null
Unique_Id_Type              Not Null
Temperature                 Not Null
Eyes (Normal, Dilated)      Not Null
Blood_Pressure_Systolic     Not Null
Blood_Pressure_Diastolic    Not Null
Heart_Rate                  Not Null
Comments
Recommendations
Treatment
Medicine_Prescribed
Disease_Id

Table 7. EMS entity object relating to person

EMS
Unique_Id              Not Null
Unique_Id_Type         Not Null
Service_Provided_Id    Not Null
EMS_ID                 Not Null
Comments

Table 8. Public works entity object relating to person and address

PUBLIC WORKS
Work_Order_Id    Not Null
Unique_Id
Unique_Id_Type
Address_Id

Table 9. Fire department entity object relating to person and address

FIRE
Call_Id             Not Null
Response_Unit_Id    Not Null
Address_Id          Not Null
Unique_Id
Unique_Id_Type
Comments

WEB SERVICES

The construction of our enterprise architecture, from a technology perspective, relies on the
utilization of the data model, Web services, and SOA. In our approach, we take advantage of three different definitions of a Web service, while maintaining that a Web service, based on the Web services architecture, is considered a software system (Guruge, 2004):

• Definition 1: Web services are modular, self-contained "applications" or application logic developed per a set of open standards (Guruge, 2004).
• Definition 2: Web services are extensible markup language (XML) applications mapped to programs, objects, or databases, or to comprehensive business functions (Newcomer, 2002).
• Definition 3: A Web service is a particular implementation of a protocol (SOAP), Web services description language (WSDL), and universal description discovery and integration (UDDI) (Fermantle, 2002), where SOAP:
  ° Uses an RPC or request-response mechanism based on HTTP.
  ° Utilizes an XML message format that contains an address, a possible header, and a body.
  ° Contains one or more elements. The elements are defined using common interoperable data formats (integers, strings, and doubles). The parameters may be encoded as child elements of a common parent whose name indicates the operation and whose namespace indicates the service.
  ° Can be sent over a common transport, typically HTTP.
WSDL:
• Offers the ability to describe the inputs and outputs of a Web service.
• Allows a Web service to publish the interface of a service; thus, if a client sends a SOAP message in format A to the service, it will receive a reply in format B. WSDL has two basic strengths:
  ° It enforces the separation between the interface and the implementation.
  ° WSDL is inherently extensible.

UDDI:
• A discovery mechanism used to discover available services.
Although a Web service is a particular implementation of a protocol (SOAP), Web services description language (WSDL) and UDDI, the Web service is composed of one or more independent services. A service represents a particular function of the system and has a well-defined, formal interface called its service contract that:

• Defines what the service does, and
• Separates the service's externally accessible interface from the service's technical implementation (Newcomer, 2002).
For instance, a Web service can contain a service that performs the function of adding data, another service that performs the function of retrieving data, and another service that performs the function of generating reports for management. A service can be either an atomic (simple) or a composite (complex) service. An atomic service does not rely on other services and is usually associated with straightforward business transactions or with executing data queries and data updates (Newcomer, 2002). A composite service uses other services, has a well-defined service contract, is registered in the service registry, can be looked up via the service registry, and can be invoked like any other service provider (Newcomer, 2002). Regardless of the service type (atomic or composite), services are required to satisfy the following basic requirements (Fermantle, 2002):
• Technology neutral: Each service is not technology dependent and can be invoked through standardized, lowest common denominator technologies.
• Loosely coupled: Each service has a life of its own, remains independent of all other services, and has no knowledge about other services.
• Support location transparency: Services should have their definition and location information stored in a repository such as UDDI, accessible by a variety of clients that can locate and invoke the services irrespective of their location.
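The atomic/composite distinction can be sketched in a few lines (our illustration, not the chapter's service code): two data-centric atomic services encapsulate the person and address entities, and a composite service composes them. The in-memory "tables" and function names are assumptions that mirror the services discussed below.

# Minimal sketch of data-centric atomic services and a composite service.
PERSONS: list[dict] = []
ADDRESSES: list[dict] = []

def add_address(address: dict) -> int:
    """Atomic, data-centric service: stores one ADDRESS row."""
    ADDRESSES.append(address)
    return len(ADDRESSES) - 1          # acts as the Address_Id

def add_person(person: dict) -> None:
    """Atomic, data-centric service: stores one PERSON row."""
    PERSONS.append(person)

def person_address_composite(person: dict, address: dict) -> None:
    """Composite service: invokes the two atomic services as one unit,
    linking the survivor to the address captured during data collection."""
    person["Address_Id"] = add_address(address)
    add_person(person)

person_address_composite(
    {"Unique_Id": "123-45-6789", "Unique_Id_Type": "SSN",
     "First_Name": "Jane", "Last_Name": "Doe", "Sex": "F",
     "Status": "Living"},
    {"Street_Number": "1200", "Street_Name": "Canal", "Street_Type": "St",
     "City": "New Orleans", "State": "LA", "Current_Address": "Y"},
)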
The Basic Services

During the response phase and part of the recovery phase, several assumptions are made, such as limited landlines, limited mobile communications, and limited Internet and intranet services. The main objective, however, is to form a basic framework using Zachman's framework (system model perspective), Web services, and service-oriented architecture. By maintaining our focus on basic services for the Web service, a foundation is created for extending our Web services to a service-oriented architecture later in the recovery phase. If we utilize the best practice approach of Krafzig, Banke, and Slama (2004), we can identify two crucial basic service types: simple data-centric services and logic-centric services. A data-centric service is used to handle data manipulation, data storage, and data retrieval (Krafzig, Banke, & Slama, 2004). We can easily incorporate logic-centric services at a later date to handle business processing and application logic. In a data-centric service, an entity can be encapsulated into a service (Krafzig et al., 2004). This encapsulation acts as a data layer, and all services developed in the future will have to access these services to access and manipulate the data. In this chapter, the primary data elements are wrapped into services, and then a composite service is created that utilizes the simple
services of person and address. An example is presented below for clarity. The PersonAddress_Composite service will be used in the initial data collection process for the disaster survivors. The SOAP XML message format representation for the PersonAddress_Composite service is provided for clarity; its envelope carries one element per PERSON and ADDRESS attribute of Tables 2 and 3 (the element names shown here follow those attributes):

POST /Primary_Core_Service/Service1.asmx HTTP/1.1
Host: localhost
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://tempuri.org/Primary_Core_Service/Service1/PersonAddress_Composite"

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <PersonAddress_Composite xmlns="http://tempuri.org/Primary_Core_Service/Service1">
      <Unique_Id>string</Unique_Id>
      <Unique_Id_Type>string</Unique_Id_Type>
      <First_Name>string</First_Name>
      <Middle_Name>string</Middle_Name>
      <Last_Name>string</Last_Name>
      <Name_Suffix>string</Name_Suffix>
      <Date_of_Birth>string</Date_of_Birth>
      <Sex>string</Sex>
      <Status>string</Status>
      <Phone>string</Phone>
      <Address_Id>int</Address_Id>
      <Street_Number>string</Street_Number>
      <Prefix>string</Prefix>
      <Street_Name>string</Street_Name>
      <Street_Type>string</Street_Type>
      <PostDir>string</PostDir>
      <City>string</City>
      <State>string</State>
      <Zip>string</Zip>
      <Current_Address>string</Current_Address>
    </PersonAddress_Composite>
  </soap:Body>
</soap:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <PersonAddress_CompositeResponse xmlns="http://tempuri.org/Primary_Core_Service/Service1">
      <PersonAddress_CompositeResult>string</PersonAddress_CompositeResult>
    </PersonAddress_CompositeResponse>
  </soap:Body>
</soap:Envelope>

Individual person and address services were necessary for the initial data population and to make data available to entities and agencies as they are developed. For instance, the police may spot a crime taking place at a particular address; thus, they must be able to retrieve or add that address to the database, thereby identifying the crime location. In another instance, the DMV will require basic person and address data for license issuance, fines, and motor vehicle infractions. Individual and composite services, using data-centric services, can be generated for each of the essential agencies for immediate data collection and tracking purposes. Later in the recovery phase, logic-centric services can be integrated to provide business rule processing. In the example below, the services are extended to include the Department of Health. The SOAP message formats for the Health_Service and PersonHealth_Service are provided below for clarity; the element bodies follow the HEALTH entity attributes of Table 6:

POST /Primary_Core_Service/Service1.asmx HTTP/1.1
Host: localhost
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://tempuri.org/Primary_Core_Service/Service1/Health_Service"

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Health_Service xmlns="http://tempuri.org/Primary_Core_Service/Service1">
      <!-- one element per HEALTH attribute of Table 6: Unique_Id,
           Unique_Id_Type, Temperature, Eyes, Blood_Pressure_Systolic,
           Blood_Pressure_Diastolic, Heart_Rate, Comments, Recommendations,
           Treatment, Medicine_Prescribed, Disease_Id -->
    </Health_Service>
  </soap:Body>
</soap:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Length: length

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- the response echoes the same HEALTH elements -->
  </soap:Body>
</soap:Envelope>

Composite Person_Health Service:

POST /Primary_Core_Service/Service1.asmx HTTP/1.1
Host: localhost
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://tempuri.org/Primary_Core_Service/Service1/PersonHealth_Service"

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <PersonHealth_Service xmlns="http://tempuri.org/Primary_Core_Service/Service1">
      <!-- combines the PERSON elements (Table 2) with the HEALTH
           elements (Table 6) in a single call -->
    </PersonHealth_Service>
  </soap:Body>
</soap:Envelope>
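As a usage sketch (ours; in practice such clients would be generated from the service's WSDL), the composite service above can be invoked from any HTTP-capable client by posting the SOAP envelope. The endpoint, SOAPAction, and element names follow the listing above; all field values are illustrative.

import urllib.request

ENDPOINT = "http://localhost/Primary_Core_Service/Service1.asmx"
SOAP_ACTION = ("http://tempuri.org/Primary_Core_Service/Service1/"
               "PersonAddress_Composite")

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <PersonAddress_Composite
        xmlns="http://tempuri.org/Primary_Core_Service/Service1">
      <Unique_Id>123-45-6789</Unique_Id>
      <Unique_Id_Type>SSN</Unique_Id_Type>
      <First_Name>Jane</First_Name>
      <Last_Name>Doe</Last_Name>
      <Sex>F</Sex>
      <Status>Living</Status>
      <Street_Number>1200</Street_Number>
      <Street_Name>Canal</Street_Name>
      <Street_Type>St</Street_Type>
      <City>New Orleans</City>
      <State>LA</State>
      <Current_Address>Y</Current_Address>
    </PersonAddress_Composite>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": SOAP_ACTION},
)
with urllib.request.urlopen(request) as response:   # POST; returns SOAP reply
    print(response.read().decode("utf-8"))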
Designing Data Marts from XML and Relational Data Sources

These elements will be simplified into declarations where the inner parentheses are dropped and each element has at most one operator (for instance, under such flattening a declaration like <!ELEMENT A ((B, C*) | D)> becomes <!ELEMENT A (B?, C*, D?)>).
Transition Tree Construction

As shown in Figure 1, the DTD simplification step is followed by the construction of transition trees for the simplified DTD. In this step, a simplified DTD is split into substructures reorganized as trees that we call transition trees. The latter facilitate the relational schema generation from the initial DTD. Each transition tree has a root node, intermediate nodes, and terminal nodes (leaves), all of which are connected by directed arcs. The root node and intermediate nodes refer to one element in the simplified DTD (noted DTDS). On the other hand, a leaf denotes either a PCDATA element, an attribute, or an element identified as the root node of a transition tree (the same tree if the DTD contains a recursion). In addition, the arcs of the transition tree can be labeled with the attribute type (ID, IDREF).
Root Determination

We determine the root node of a transition tree by one of the following four rules, which we have developed in (Hachaichi, Feki, & Ben-Abdallah, 2008-a):

R1. Each DTD element that does not appear in the declaration of any other element is a root of a transition tree.

In general, each XML document can be seen as one root element that contains all the elements in the document. Thus, rule R1 will extract this topmost element as the root of one transition tree. The application of R1 must, however, exclude recursive DTDs where all elements are sub-elements of other elements (Yan & ADA, 2001). We treat the case of a recursive DTD in rule R4.

R2. Each element that contains at least one non-PCDATA element is a root for a transition tree.

Rule R2 excludes transition trees composed of the root connected to only leaf nodes (PCDATA). Such a tree, in the XML document, can be considered as a basic data entity, in the sense that it cannot represent a relationship. In addition, by imposing that a transition tree contains at least one complex element, R2 ensures that the transition tree represents a relationship.

R3. Each element contained in the declaration of n elements (n ≥ 2) is the root of a transition tree.
This rule avoids the redundancy of elements in a transition tree and identifies the elements shared by several trees as one transition tree.

R4. Each element directly or transitively containing its own declaration is the root of a transition tree.

This rule treats the case of a recursive DTD. Informally, if an element refers to itself in its declaration (directly or indirectly), then this element contains a non-PCDATA element and thus, in accordance with rule R2, this element should be the root of a transition tree. For our running example (Figure 2), the above four rules identify eight roots: e-Ticket, Bookings, Consumer, Room, Buy, Concert, City and State.

Transition Tree Construction

For each identified root, this step constructs a corresponding transition tree. This requires a fine scan of the simplified DTDS by applying the algorithm Create_tree(E, DTDS), starting from a root E (Hachaichi, Feki, & Ben-Abdallah, 2008-b):

Algorithm Create_tree(E, DTDS)   // E is the current node in DTDS
{
  1. for each element e in the declaration of E do {
     1.1 AddChildNode(E, e)                  // add a child e to the node E
     1.2 if (e is determined as a root) then // e identified by rules R1 to R4
         1.2.1 AnnotateNode(e, #)
     1.3 else if (e contains other elements or attributes) then
         1.3.1 Create_tree(e, DTDS)
  }
  2. for each attribute a in the declaration of E do {
     2.1 AddChildNode(E, a)
     2.2 MarkArcType(E, a)                   // mark arc from E to a by ID or IDREF
  }
}

In the transition tree construction algorithm, the function AnnotateNode(e, #) marks any sub-element that is the root of a transition tree with the symbol #. This annotation is borrowed from the concept of foreign keys in relational databases; it is useful to link transition trees of the same DTDS and to construct parameter hierarchies. Note that with this annotation, the constructed transition trees will have at most a depth of four. Such a limited depth accelerates the traversal during the DM schema construction step. On the other hand, the function MarkArcType(E, a) annotates the arc from E to a with the type (ID or IDREF) if the attribute is a unique identifier or an attribute that points to an ID attribute. Figure 5 shows the transition trees constructed by this algorithm for the e-Ticket DTD example.
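As an illustrative sketch (ours, under the assumption that the simplified DTD is available as in-memory dictionaries), Create_tree can be realized as a straightforward recursive traversal; the node and arc annotations mirror the pseudocode above.

# Sketch of Create_tree over a toy representation of a simplified DTD:
# ELEMENTS maps an element to its sub-elements, ATTRIBUTES maps an element
# to (attribute, ID/IDREF/None) pairs, and ROOTS holds the elements selected
# by rules R1-R4. All data below is a hypothetical example.
ELEMENTS = {"Bookings": ["Room", "Customer"], "Room": ["RoomNo"],
            "Customer": ["Name"], "RoomNo": [], "Name": []}
ATTRIBUTES = {"Bookings": [("ref", "IDREF")], "Room": [("id", "ID")],
              "Customer": [("id", "ID")], "RoomNo": [], "Name": []}
ROOTS = {"Bookings", "Room", "Customer"}

def create_tree(element: str) -> dict:
    node = {"name": element, "children": []}
    for e in ELEMENTS[element]:                 # step 1: sub-elements
        if e in ROOTS:
            # AnnotateNode(e, #): the child is itself a transition-tree root
            node["children"].append({"name": e + "#", "children": []})
        elif ELEMENTS[e] or ATTRIBUTES[e]:
            node["children"].append(create_tree(e))
        else:                                   # plain PCDATA leaf
            node["children"].append({"name": e, "children": []})
    for a, arc_type in ATTRIBUTES[element]:     # step 2: attributes
        # MarkArcType(E, a): keep the ID/IDREF label on the arc
        node["children"].append({"name": a, "arc": arc_type, "children": []})
    return node

for root in ROOTS:
    print(create_tree(root))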
Transition Tree Enrichment

To build a relational database schema from transition trees, we need the attribute types (e.g., number, date, ...). However, such typing information is totally absent in the DTD and XML documents; in fact, a DTD schema declares all data as strings. To assign a data type to the attributes and PCDATA elements in a transition tree, we query a sample set of XML documents valid with respect to the source DTD. For each leaf not marked with # (i.e., each attribute and PCDATA element), we consult the data contained in its corresponding XML tag. Then, we determine a type by scanning its text value and cast it into one of three appropriate types: date, number or string. To assist us with this analysis of the XML documents, there are several semi-structured query languages for XML documents, including XML-QL, Lorel, UnQL, XQL (from Microsoft), and XQuery (from W3C). All these languages have the notion of path expressions to navigate the nested structures of XML documents. The application of the enrichment step on the transition trees of our running example (Figure 5) adds the data types to the leaf nodes not annotated with the symbol # as shown in Figure 6.

Figure 5. Transition trees for the e-Ticket DTD
Figure 6. Transition trees of Figure 5 enriched with data types
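The per-leaf type test can be sketched as follows (our illustration; the chapter performs the equivalent scan with a query language such as XQuery over the sample documents, and the accepted date formats are assumptions):

from datetime import datetime

def infer_type(values: list[str]) -> str:
    """Cast a leaf's sampled text values into one of the three types used
    by the enrichment step: date, number or string (the fallback)."""
    date_formats = ("%Y-%m-%d", "%d/%m/%Y")   # assumed sample formats
    def is_date(v: str) -> bool:
        for fmt in date_formats:
            try:
                datetime.strptime(v, fmt)
                return True
            except ValueError:
                pass
        return False
    def is_number(v: str) -> bool:
        try:
            float(v)
            return True
        except ValueError:
            return False
    if all(is_date(v) for v in values):
        return "date"
    if all(is_number(v) for v in values):
        return "number"
    return "string"

print(infer_type(["2008-05-01", "2008-06-11"]))  # -> date
print(infer_type(["12", "14.5"]))                # -> number
print(infer_type(["Standard", "Suite"]))         # -> string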
Relational Schema Generation for the XML Source

In this stage, the typed transition trees are transformed into relational schemas based on the links among the transition trees. This transformation is conducted through the algorithm XML2R, which uses the following notation:

• Refn: the set of nodes that can be foreign keys in the relation Rn (constructed on node n)
• Build_R(n): a function that builds on node n a relation called Rn
• ID(Rn): a function that returns the primary key of a relation Rn
• Add_attribute(a, Rn): adds attribute a as a column to the relation Rn
• Mark_PK(pk, Rn): marks the pk list of attributes as the primary key of the relation Rn
• Mark_FK(a, Rn): marks the attribute a as a foreign key in the relation Rn
• ADD_ID(Rn): adds an artificial primary key to the relation Rn; the primary key name, nID, is the concatenation of the node name n and the string 'ID'
Algorithm XML2R()
1. NR = the set of transition trees reduced to roots, i.e., without any descendant.
2. NT = the set of nodes all of whose children are leaves not annotated with #.
3. For each node n ∈ NR do {
   3.1 Rn = Build_R(n);
   3.2 Add_attribute(n, Rn);
   3.3 Mark_PK({n}, Rn);
}
4. For each node n ∈ NT do {   // n has only leaf children not annotated with #
   4.1 Rn = Build_R(n);
   4.2 ID_found = False;
   4.3 For each child a of n do {
       4.3.1 Add_attribute(a, Rn);
       4.3.2 If (arc(n, a) = ID) then {
             4.3.2.1 Mark_PK({a}, Rn);
             4.3.2.2 ID_found = True;
       }
       4.3.4 If (arc(n, a) = IDREF) then
             4.3.4.1 Mark_FK(a, Rn)
   }
   4.4 If (!ID_found) then
       4.4.1 ADD_ID(Rn)
}   // end for line 2
5. For each non-terminal node n ∉ NT do {
   5.1 Rn = Build_R(n);
   5.2 Refn = ∅;
   5.3 ID_found = False;
   5.4 For each child a of n do {
       5.4.1 If (a is a leaf marked with #) then {
             5.4.1.1 Add_attribute(ID(Ra), Rn);
             5.4.1.2 Mark_FK(ID(Ra), Rn);
             5.4.1.3 Refn = Refn ∪ ID(Ra);
       }
       5.4.2 else {
             5.4.2.1 Add_attribute(a, Rn);
             5.4.2.2 If (arc(n, a) = ID) then {
                     Mark_PK({a}, Rn);
                     ID_found = True;
             }
             5.4.3 If (arc(a, n) = IDREF) then
                   Mark_FK({a}, Rn);
       }
   }
   5.5 If (!ID_found) then
       5.5.1 If (COUNT(Refn) ≥ 2) then
                 Mark_PK(Refn, Rn);
             else
                 ADD_ID(Rn);
}
// end of algorithm
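A condensed executable sketch of XML2R (ours; it compresses the bookkeeping of the pseudocode and handles trees of depth two, i.e., a root with leaf children, which covers steps 3 to 5) is given below. The tree encoding and the "name + ID" stand-in for ID(Ra) are assumptions carried over from the earlier sketch.

# Condensed XML2R sketch. A tree is {"name", "children"}; a child is
# {"name", optional "arc" in {"ID", "IDREF"}}, and a child name ending in
# "#" references another transition tree. Illustrative only.
def xml2r(trees: list[dict]) -> list[dict]:
    relations = []
    for tree in trees:
        rel = {"name": "R" + tree["name"], "columns": [], "pk": [], "fks": []}
        if not tree["children"]:                # step 3: root-only tree
            rel["columns"] = [tree["name"]]
            rel["pk"] = [tree["name"]]
            relations.append(rel)
            continue
        refs, id_found = [], False
        for c in tree["children"]:              # steps 4 and 5
            if c["name"].endswith("#"):         # leaf marked with #
                fk = c["name"].rstrip("#") + "ID"   # stands in for ID(Ra)
                rel["columns"].append(fk)
                rel["fks"].append(fk)
                refs.append(fk)
            else:
                rel["columns"].append(c["name"])
                if c.get("arc") == "ID":
                    rel["pk"], id_found = [c["name"]], True
                elif c.get("arc") == "IDREF":
                    rel["fks"].append(c["name"])
        if not id_found:
            if len(refs) >= 2:                  # concatenation of FKs
                rel["pk"] = refs
            else:                               # ADD_ID(Rn)
                rel["columns"].insert(0, tree["name"] + "ID")
                rel["pk"] = [tree["name"] + "ID"]
        relations.append(rel)
    return relations

# A hypothetical booking tree referencing the Room and Customer trees:
booking = {"name": "Bookings", "children": [
    {"name": "Room#"}, {"name": "Customer#"}, {"name": "BookedStartDate"}]}
print(xml2r([booking]))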
This transformation algorithm examines the set of all transition trees and operates according to the following five steps:

• Step 1 finds the transition trees composed of a single node.
• Step 2 finds the set of nodes all of whose children are leaves not annotated with #.
• Step 3 transforms each transition tree reduced to its root r into a single-column table with key r. This transformation avoids the loss of degenerated dimensions and ensures that dimensions can potentially be built on these nodes.
• Step 4 builds a table Rn for each node n all of whose children are leaves not annotated with #. The columns of Rn are the children of n. If a child a of n has a leading arc labeled with ID, then it becomes the primary key of Rn; if no such child exists, an artificial primary key is generated via the ADD_ID function. Note that the tables issued from this step may be referred to by relations created in step 5. For our example, this step produces the tables RoomTypes, RoomFacilities, County and Singer of Figure 7.
• Step 5 deals with tables referencing other tables. For each such table, this step creates a table whose primary key is either the node annotated with ID, or the concatenation of its foreign keys when their number exceeds one; otherwise, an artificial attribute is added as a primary key.

Figure 7 shows the relational schema generated by applying the XML2R algorithm on the transition trees of the e-Ticket DTD. In this schema, primary keys are underlined and foreign keys are followed by the sharp sign (#) and the name of the referenced relation.

Figure 7. Relational schema derived from e-Ticket DTD

Relational Schema Integration

At this stage, we have produced one relational schema from the relational database and a second one from the DTD/XML documents. However, these two schemas represent one "virtually single" data source used to load the DM. Thus, they must be integrated to represent conceptually one,
coherent database. In other words, any semantic heterogeneity must be resolved at this stage. Quite a few existing works treat semantic heterogeneity in relational databases, cf. (Bright, Hurson, & Pakzad, 1994), (Sheth & Larson, 1990), (Ceri & Widom, 1993), (Hull, 1997), (Zhang & Yang, 2008), and XML document storage in relational databases, cf. (Ceri, Fraternali, & Paraboschi, 2000), (Deutsch, Fernandez, & Suciu, 1999), (Lee & Chu, 2000), (Schmidt, Kersten, Windhouwer, & Waas, 2000), (Kappel, Kapsammer, & Retschitzegger, 2001). In addition, selected schema heterogeneity issues were treated in the context of XML and relational database integration, cf. (Kappel, Kapsammer, & Retschitzegger, 2000). These works, used in the context of database model transformation, can also be used to resolve the semantic heterogeneity within our design method. Overall, the works on relational DB schema integration proceed in three steps: pre-integration, where the different source models are transformed into the same model; schema correspondence, which resolves naming conflicts; and schema fusion, which produces a global schema by replacing equivalent schemas, regrouping intersecting schemas and collecting independent schemas. In our method, the pre-integration step is handled by our pretreatment step, which leaves us with the schema correspondence and fusion steps. To illustrate the integration step, let us revisit our running example (Figures 3 and 7), which requires only the fusion step. The integrated schema of Figure 8 is produced by applying the following fusion operations:

• Import the relations Payments, PaymentMethods and RoomBands from the hotel room booking relational schema.
• Import the relations Singer, Buy and Concert from the e-Ticket relational schema.
• Import the relations Room, RoomTypes, RoomBands, RoomFacilities, Bookings, Customer, County, State and City from the e-Ticket and hotel room booking schemas and integrate them using the union operator.

Figure 8. Integrated relational schema issued from e-Ticket and hotel room booking
RELATION CLASSIFICATION

In current data-driven DW/DM development methods, entities and relationships are the keystone of the conceptual design. More precisely, dimensions are often built from entities and date dimensions from temporal attributes, whereas facts are mainly built from n-ary relationships linking entities and rarely on entities. However, this distinction is not explicitly present in the relational model, where one single concept is used to model both entities and relationships, namely the relational table (or table for short). Hence, in order to define a design process that correctly identifies facts and dimensions from a relational schema (i.e., a set of relational tables), we must first determine the conceptual class of each table. To do so, we perform a reverse engineering task by precisely examining the structure of the tables in the sources. In fact, this leads to a scan of the set of attributes composing the primary and foreign keys of the source tables. We have presented in our previous work (Feki & Hachaichi, 2007-a) how to partition a set of tables, issued from an operational information system, into two subsets: a subset of tables modeling relationships and another subset modeling entities. Briefly, a table representing a relationship is characterized by a primary key composed of one or several foreign keys, whereas a table representing an entity generally has a primary key that contains no foreign keys. This classification should well form the two subsets of tables by satisfying three basic properties: disjointness, completeness and correctness. The first property imposes that the two subsets share no common table; this ensures that each table is uniquely classified. The completeness property ensures that every table has a class. The correctness property requires that each table is correctly
identified, i.e., as a relationship if it originally models a relationship and as an entity if it models an entity. Correctness is less trivial than the first two properties, and it is not satisfied in the two following situations: 1) when the primary key of a relationship is not the concatenation of all its foreign keys, that is, when this primary key is an artificial attribute such as a sequential number; or 2) when the primary key of a relationship is the concatenation of attributes coming from empty entities. Such attributes are never foreign keys, since an empty entity (i.e., an entity reduced to its key) never transforms into a relation. For more details about empty entities, the reader is referred to (Feki & Hachaichi, 2007-b), where an illustrative example is given. Table 1 shows the classification of the relations presented in Figure 8.
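The classification heuristic can be sketched directly (our illustration, assuming each table is described by its primary-key and foreign-key attribute sets; the key structures shown are assumptions about the example schema):

# Sketch of the relation classification step: a table whose primary key is
# composed of foreign keys models a relationship; otherwise it models an
# entity.
def classify(primary_key: set[str], foreign_keys: set[str]) -> str:
    if primary_key and primary_key <= foreign_keys:
        return "Relationship"
    return "Entity"

print(classify({"CustomerID"}, {"CityID"}))                  # -> Entity
print(classify({"CustomerID", "ConcertID"},
               {"CustomerID", "ConcertID"}))                 # -> Relationship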
DM SCHEMA CONSTRUCTION

This third design step builds DM schemas modeled as stars through the application of a set of extraction rules defined for each multidimensional concept. Our rules have the merit of being independent of the semantics of the data source. In addition, they keep track of the origin (table name, column name, data type, length, ...) of each multidimensional concept in the generated DM schemas. This traceability is fundamental as we intend to help automate the generation of ETL (Extract, Transform and Load) procedures to load the designed DM.

Table 1. Source relations classified into entities and relationships

Relational Table   Conceptual Class
Room               Entity
RoomTypes          Entity
RoomBands          Entity
RoomFacilities     Entity
Payments           Entity
PaymentMethods     Entity
Bookings           Relationship
Customer           Entity
County             Entity
State              Entity
City               Entity
Singer             Entity
Concert            Entity
Buy                Relationship
Fact Identification

To identify facts, we exploit our previous table classification and build a set of facts of first-relevance level from tables representing relationships, and then a set of facts of second-relevance level issued from tables representing entities. This distinction in analysis relevance levels is very useful; it assists the DW designer in selecting, from the several generated facts, those that have a higher analysis potential. In fact, in all DW design approaches it has been unanimously accepted that a business activity (e.g., Sales, Bill) is generally modeled as a relationship. This observation incites us to limit the construction of facts mainly to relationships (Golfarelli, Maio, & Rizzi, 1998), (Cabibbo & Torlone, 1998), (Feki, Nabli, Ben-Abdallah, & Gargouri, 2008) and rarely to entities (Moody & Kortnik, 2000), (Phipps & Davis, 2002), (Feki, Nabli, Ben-Abdallah, & Gargouri, 2008). On the other hand, in practice, not all relationships are useful for building facts, so we limit the set of facts at the first-relevance level to those containing a non-key numeric attribute. For the integrated schema of our running example, e-Ticket and hotel room booking, we obtain the facts depicted in Table 2, where a fact built on a table T is conventionally named F-T. To complete this fact identification step, we consider that each table representing an entity and containing a numeric, non-key attribute is a fact at the second-relevance level (cf. Table 2). Note that the numeric non-key attribute condition excludes
facts without measures, which are considered as infrequent facts.
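Combining this rule with the classification above gives the following sketch (ours; the per-table metadata — conceptual class, key attributes and numeric attributes — is assumed to be known from the integrated schema, and the Bookings key is hypothetical):

# Sketch of fact identification: relationship tables with a non-key numeric
# attribute yield first-relevance facts; entity tables with one yield
# second-relevance facts.
def identify_facts(tables: dict[str, dict]) -> list[tuple[str, str]]:
    facts = []
    for name, t in tables.items():
        numeric_non_key = set(t["numeric"]) - set(t["pk"]) - set(t["fks"])
        if not numeric_non_key:
            continue          # facts without measures are excluded
        level = "First" if t["class"] == "Relationship" else "Second"
        facts.append(("F-" + name, level))
    return facts

tables = {
    "Bookings": {"class": "Relationship", "pk": ["BookingID"], "fks": [],
                 "numeric": ["TotalPayementDueAmount"]},
    "Room":     {"class": "Entity", "pk": ["RoomID"], "fks": [],
                 "numeric": ["Price"]},
    "City":     {"class": "Entity", "pk": ["CityID"], "fks": [],
                 "numeric": []},
}
print(identify_facts(tables))
# -> [('F-Bookings', 'First'), ('F-Room', 'Second')]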
Measure Identification

A fact contains a finite set of measures. In most cases, measures serve to compute summarized results (e.g., total amount of sales by month and year, by mart, ...) using aggregate functions; hence, measures have numeric values. Therefore, we extract measures mainly from fact-tables (i.e., the tables on which the facts are built). Furthermore, in rare cases, a few additional measures can be extracted not from a fact-table itself but from tables parallel to it. Below, we informally explain how we extract measures in each of these cases.
Measures Identified from a Fact-Table

To construct a significant set of candidate measures for a fact F-T built on a table T, we exclude key attributes from the set of numeric attributes of T, because keys are generally artificial and redundant data and do not trace/record the enterprise business activity. Moreover, we have shown in (Feki & Hachaichi, 2007-b) that we must exclude from T its "non-key attributes belonging to other tables", because these attributes really represent keys issued from empty entities. Table 2 shows all measures extracted from each identified fact-table.
Table 2. Facts and measures for the integrated schema issued from hotel room booking and e-Ticket of Figure 8

Fact         Relevance level   Measure
F-Room       Second            Price
F-Payments   Second            PaymentAmount
F-Bookings   First             TotalPayementDueAmount, TotalPayementDueDate
F-Buy        First             TotalPayementB
Measures Identified from Parallel Tables

As mentioned above, a second origin of measures is parallel tables. We adapted the definition of parallel tables from the concept of parallel relationships, which is specific to the E/R model (Seba, 2003). In an E/R diagram, a relationship R1 connected to m entities is said to be parallel to a relationship R2 connected to n entities (m ≤ n) if the m entities linked to R1 are also linked to R2. By analogy, we define the concept of parallel tables as follows: let T1 and T2 be two relationship tables such that T1 and T2 are connected to m and n (m ≤ n) tables, respectively; T1 is said to be parallel to T2 (noted T1//T2) if and only if the primary key of T1 is included in or equal to the primary key of T2 (Feki & Hachaichi, 2007-c). Note that, in this definition, T1 and T2 are assumed to be relationships, because entities cannot be parallel; this optimizes the search for parallel tables (Feki & Hachaichi, 2007-c). In our running example, there are no parallel tables.

Let T1 and T2 be already identified as two fact-tables, and let T1 be parallel to T2. The fact F-T1 (built on T1) can receive other measures coming from the fact F-T2 (built on T2) by aggregating the measures of F-T2 before moving them to F-T1. Since the measures in F-T2 are more detailed than those in F-T1 (i.e., F-T2 has n-m > 0 dimensions more than F-T1), to move them to F-T1 they must be aggregated on the set of their uncommon dimensions. Note that if the dimension set of F-T1 is equal to the dimension set of F-T2, then the set of uncommon dimensions between F-T1 and F-T2 is empty; therefore, T1 is parallel to T2 and reciprocally. Consequently, the measures of both facts F-T1 and F-T2 have the same granularity level and could be seen as two halves of the same fact; hence, we recommend merging them into a single fact conventionally called F-R1-R2. In our design method, aggregated measures as well as the dimensions used in their calculation are automatically identified. However, the designer
must intervene to define the necessary aggregation functions, which are semantics-dependent.
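The parallelism test itself reduces to a key-inclusion check, as in the following sketch (ours, with primary keys given as attribute sets and the two relationship tables purely hypothetical):

# Sketch of the parallel-tables test: T1 // T2 iff the primary key of T1
# is included in or equal to the primary key of T2 (both relationship tables).
def is_parallel(pk_t1: set[str], pk_t2: set[str]) -> bool:
    return pk_t1 <= pk_t2

# A daily sales table and one that additionally details the sales outlet:
print(is_parallel({"ProductID", "DateID"},
                  {"ProductID", "DateID", "OutletID"}))   # True: parallel
print(is_parallel({"ProductID", "OutletID"},
                  {"ProductID", "DateID"}))               # False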
Dimension Identification

A dimension is generally made up of a finite set of attributes that define various levels of detail (hierarchies), whereas other attributes are less significant but are used, for instance, to label results or to restrict the data processed in queries. These latter are called weak (or non-dimensional) attributes. The set of candidate dimensions for a given fact can be built either on tables modeling entities or on attributes. Given a fact F-T (i.e., a fact built on table T), we consider every table T1 that represents an entity and that is directly referred to by the table T as a candidate dimension for F-T. Conventionally, the name of this dimension is D-T1 and its identifier is the primary key of T1. In addition to dimensions built on tables, we can build a dimension both on an attribute of a special data type (Boolean, temporal) as well as on an attribute issued from an empty entity. Such a dimension is known in data warehousing as a degenerated dimension. For instance, a Boolean column splits the data-rows of its table into two subsets; thus, such an attribute can be an axis of analysis. In practice, a degenerated dimension is integrated inside the fact. A Boolean column b pertinent to a fact-table T produces for T a candidate degenerated dimension named D-b and identified by b. For instance, a Gender column in a Client database table can build a dimension D-Gender. Furthermore, the data warehouse community assumes a DW is a chronological collection of data (Kimball, Reeves, Ross, & Thornthwaite, 1998); consequently, the time dimension appears in all data warehouses. For this reason, we propose to build dimensions on temporal attributes as follows: a temporal attribute (date or time) belonging to a fact-table T timestamps the occurrences of the fact built on T; it generates a candidate dimension for T, of which it is the identifier. For the relational schema issued from e-Ticket and hotel room booking, the above rules produce the dimensions shown in Table 3.

Table 3. Dimensions for the extracted facts

Fact         Dimension            Identifier
F-Room       D-RoomTypes          RoomTypeID
             D-RoomBands          RoomBandID
             D-Facilities         RoomFaciliyID
F-Payments   D-Customer           CustomerID
             D-PaymentMethods     PaymentMethodID
F-Bookings   D-Customer           CustomerID
             D-DateBookingMade    DateBookingMade
             D-TimebookingMade    TimebookingMade
             D-Room               RoomID
             D-BookedStartDate    BookedStartDate
             D-BookedEndDate      BookedEndDate
F-Buy        D-Customer           CustomerID
             D-Concert            ConcertID
             D-BuyDate            BuyDate
             D-ConcertDate        ConcertDate
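These dimension rules can be sketched as follows (our illustration; the per-table metadata — referenced entity tables and the types of its attributes — is assumed to be available, and the Bookings description is hypothetical):

# Sketch of dimension identification for a fact F-T: one candidate dimension
# per entity table directly referenced by T, plus one degenerated dimension
# per temporal or Boolean attribute of T.
def identify_dimensions(fact_table: dict) -> list[tuple[str, str]]:
    dims = []
    for ref_table, ref_pk in fact_table["references"]:   # entity tables
        dims.append(("D-" + ref_table, ref_pk))
    for attr, attr_type in fact_table["attributes"]:
        if attr_type in ("date", "time", "boolean"):     # degenerated dims
            dims.append(("D-" + attr, attr))
    return dims

bookings = {
    "references": [("Customer", "CustomerID"), ("Room", "RoomID")],
    "attributes": [("DateBookingMade", "date"), ("TimebookingMade", "time"),
                   ("BookedStartDate", "date"), ("BookedEndDate", "date"),
                   ("TotalPayementDueAmount", "number")],
}
print(identify_dimensions(bookings))
# -> [('D-Customer', 'CustomerID'), ('D-Room', 'RoomID'),
#     ('D-DateBookingMade', 'DateBookingMade'), ...]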
Hierarchy Identification

The attributes of a dimension are organized into one or several hierarchies. These attributes are ordered from the finest towards the coarsest granularity; in order to distinguish them, we call these attributes parameters. In addition, any hierarchy of a dimension d has the identifier of d as its finest parameter (i.e., at level one), already extracted with d. The remaining parameters (i.e., those of level higher than one) forming candidate hierarchies for d will be extracted in two steps. First, we extract the parameters located immediately after the dimension identifier; each obtained parameter constitutes a hierarchy. Secondly, for each one, we extract its successor parameters. Thus, a parameter of level two is either:

a) the primary key of a table of class Entity directly referred to by a dimension-table d;
b) a Boolean or temporal attribute belonging to a dimension-table d; or
c) a non (primary or foreign)-key attribute belonging to a dimension-table d and to other tables.

Table 4. Parameters for the dimensions of Table 3

Dimension    Hierarchy parameters (from finest to coarsest)
D-Customer   CityID, StateID, CountyID
             CustomerDOB
D-Room       RoomTypeID
             RoomBandID
             RoomFaciliyID
D-Concert    SingerID
             CityID, StateID, CountyID
The recursive application of the above two steps on the tables obtained in step a) produces parameters of level higher than two. Table 4 presents the hierarchy parameters of each dimension in Table 3. A parameter may functionally determine some attributes within its origin table; these attributes describe the parameter and are therefore called descriptive (also non-dimensional or weak) attributes. Descriptive attributes for a parameter p are non-key textual or numerical attributes belonging to a table supplying p and not belonging to other tables. Among these attributes, the textual ones are more significant than the numerical ones (Feki & Hachaichi, 2007-a). Table 5 presents, for each parameter of Table 4, its associated descriptive attributes.
CASE TOOLSET

To support our design method, we have implemented the CAME ("Conception Assistée de Magasins et d'Entrepôts de données") CASE toolset.
Table 5. Descriptive attributes for the parameters of Table 4

Hierarchy parameter   Descriptive attributes
RoomTypeID            TypeDesc
RoomBandID            BandDesc
RoomFaciliyID         FacilityDesc
RoomID                Price, Floor, AdditionalNotes
PaymentMethodID       PaymentMethod
CountyID              CountyName
StateID               StateName
CityID                CityName
CustomerID            CustomerForenames, CustomerSurnames, CustomerHomePhone, CustomerWorkPhone, CustomerMobilePhone, CustomerEmail
SingerID              SingerForenames, SingerSurnames
ConcertID             ConcertName
CAME carries out the design of conceptual DM schemas starting either from the relational database schema of an operational database or from a set of XML documents compliant with a given DTD. Its main functions cover the steps of our DM design method:

1) Acquisition and pretreatment of a DTD, using a DTD parser and the XQuery language to extract typing information;
2) Conversion of the XML structure to a relational schema;
3) Acquisition of the relational schema;
4) Schema integration, by applying the fusion approach presented in (Hull, 1997);
5) Conceptual class identification;
6) DM conceptual design, whose result can be seen both in a tabular format (as illustrated in Figure 9 through the running example) and in a graphical representation (Figure 10); and
7) Adaptation of the obtained DM, where the designer adjusts the constructed DM schemas to the analytical requirements of decisional users. In this step, CAME is linked to our CASE tool MPI-EDITOR (Ben Abdallah, Feki, & Ben-Abdallah, 2006), which allows the designer to graphically manipulate the built DM schemas and adapt them to produce well-formed schemas (Figure 10).

Figure 9. Candidate DM schema for the integrated schema

Figure 10. GUI for DM schema adaptation
CONCLUSION

In this chapter, we have presented a bottom-up/data-driven design method for DM schemas from two types of sources: a relational database and XML documents compliant with a given DTD. Our method operates in three automatic steps (data source pretreatment, relation classification, and DM schema construction) followed by a manual step for DM schema adaptation. It exploits the most recent schema/DTD version of a data source to automatically apply a set of rules that extract all candidate facts with their measures, dimensions and hierarchies. In addition, being automatic, our design method is supported by a CASE toolset that allowed us to evaluate it through several examples. In this chapter, we illustrated the method through an e-Ticket DTD used by an online broker and a relational database modeling a hotel room booking system. Our design method assists decision makers in defining their analytical needs by proposing all
analytical subjects that could be automatically extracted from their data sources. In addition, it guarantees that the extracted subjects are loadable from the enterprise information system and/or the XML source documents. Furthermore, it keeps track of the origin of each component of the generated DM schema; this traceability information is vital for the definition of ETL procedures. We are currently extending our design method to deal with other types of data sources, in particular XML documents with XML schemas and object databases. Furthermore, we are examining how to integrate adjusted/validated DM schemas to design a DW schema loadable from all the considered sources.
REFERENCES

Ben Abdallah, M., Feki, J., & Ben-Abdallah, H. (2006). MPI-EDITOR: Un outil de spécification de besoins OLAP par réutilisation logique de patrons multidimensionnels. In Proceedings of the Atelier des Systèmes Décisionnels (ASD'06), Agadir, Morocco.
Böhnlein, M., & Ulbrich-vom Ende, A. (1999). Deriving initial data warehouse structures from the conceptual data models of the underlying operational information systems. In Proceedings of the 2nd ACM international workshop on Data warehousing and OLAP, Kansas City, Missouri (pp. 15-21). Bonifati, A., Cattaneo, F., Ceri, S., Fuggetta, A., & Paraboschi, S. (2001). Designing data marts for data warehouse. ACM Transactions on Software Engineering and Methodology, 10, 452–483. doi:10.1145/384189.384190 Boufares, F., & Hamdoun, S. (2005). Integration techniques to build a data warehouse using heterogeneous data sources. Journal of Computer Science, 48-55. Bright, M. W., Hurson, A. R., & Pakzad, S. (1994). Automated resolution of semantic heterogeneity in multidatabases. [TODS]. ACM Transactions on Database Systems, 19(2), 212–253. doi:10.1145/176567.176569 Bruckner, R., List, B., & Schiefer, J. (2001). Developing requirements for data warehouse systems with use cases. In Proceedings of the 7th Americas Conf. on Information Systems (pp. 329-335). Cabibbo, L., & Torlone, R. (1998). A logical approach to multidimensional databases. In Proceedings of the Conference on Extended Database Technology, Valencia, Spain (pp. 187-197). Ceri, S., Fraternali, P., & Paraboschi, S. (2000). XML: Current developments and future challenges for the database community. In Proceedings of the 7th Int. Conf. on Extending Database Technology (EDBT), (LNCS 1777). Berlin, Germany: Springer. Ceri, S., & Widom, J. (1993). Managing semantic heterogeneity with production rules and persistent queues source. In Proceedings of the 19th International Conference on Very Large Data Bases (pp. 108-119).
Codd, E. F. (1970). A relational model of data for large data banks. ACM Communications, 13(6), 377–387. doi:10.1145/362384.362685 Databasedev.co.uk. (2008). Sample data models for relational database design. Retrieved July 30, 2008, from http://www.databasedev.co.uk/ data_models.html Deutsch, A., Fernandez, M., & Suciu, D. (1999). Storing semi structured data in relations. In Proceedings of the Workshop on Query Processing for Semi structured Data and Non-Standard Data Formats. Feki, J., & Ben-Abdallah, H. (2007). Multidimensional pattern construction and logical reuse for the design of data marts. International Review on Computers and Software, 2(2), 124–134. Feki, J., & Hachaichi, Y. (2007). Du relationnel au multidimensionnel: Conception de magasins de données. In Proceedings of the Revue des Nouvelles Technologies de l’Information: Entrepôts de données et analyse en ligne (EDA 2007) (Vol. B-3, pp.5-19). Feki, J., & Hachaichi, Y. (2007). Conception assistée de MD: Une démarche et un outil. Journal of Decision Systems, 16(3), 303–333. doi:10.3166/ jds.16.303-333 Feki, J., & Hachaichi, Y. (2007). Constellation discovery from OLTP parallel-relations. In Proceedings of the 8th International Arab Conference on Information Technology ACIT 07, Lattakia, Syria. Feki, J., Nabli, A., Ben-Abdallah, H., & Gargouri, F. (2008). An automatic data warehouse conceptual design approach. In Encyclopedia of data warehousing and mining (2nd ed.). Hershey, PA: IGI Global. Giorgini, P., Rizzi, S., & Maddalena, G. (2005). Goal-oriented requirement analysis for data warehouse design. In Proceedings of the ACM Eighth International Workshop on Data Warehousing and OLAP, Bremen, Germany (pp 47-56).
Golfarelli, M., Maio, D., & Rizzi, S. (1998). Conceptual design of data warehouses from E/R schemas. In Proceedings of the Conference on System Sciences, Kona, Hawaii. Washington, DC: IEEE Computer Society.

Golfarelli, M., Rizzi, S., & Vrdoljak, B. (2001). Data warehouse design from XML sources. In Proceedings of the Fourth ACM International Workshop on Data Warehousing and OLAP, Atlanta, GA (pp. 40-47).

Hachaichi, Y., & Feki, J. (2007). Patron multidimensionnel et MDA pour les entrepôts de données. In Proceedings of the 2nd Workshop on Decisional Systems, Sousse, Tunisia.

Hachaichi, Y., Feki, J., & Ben-Abdallah, H. (2008). XML source preparation for building data warehouses. In Proceedings of the International Conference on Enterprise Information Systems and Web Technologies (EISWT-08), Orlando, Florida (pp. 61-67).

Hachaichi, Y., Feki, J., & Ben-Abdallah, H. (2008). Du XML au multidimensionnel: Conception de magasins de données. In Proceedings of the 4èmes journées francophones sur les Entrepôts de Données et l'Analyse en ligne (EDA 2008), Toulouse, France (Vol. B-4, pp. 45-59).

Hull, R. (1997). Managing semantic heterogeneity in databases: A theoretical perspective. In Proceedings of the Sixteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, Tucson, AZ (pp. 51-61).

Inmon, W. H. (1996). Building the data warehouse. New York: John Wiley & Sons.

Jensen, M., Møller, T., & Pedersen, T. B. (2001). Specifying OLAP cubes on XML data. Journal of Intelligent Information Systems.

Kappel, G., Kapsammer, E., Rausch-Schott, S., & Retschitzegger, W. (2000). X-Ray: Towards integrating XML and relational database systems. In Proceedings of the 19th Int. Conf. on Conceptual Modeling (ER), Salt Lake City, USA (LNCS 1920). Berlin, Germany: Springer.

Kappel, G., Kapsammer, E., & Retschitzegger, W. (2001). XML and relational database systems: A comparison of concepts. In Proceedings of the International Conference on Internet Computing (1) (pp. 199-205).

Kimball, R., Reeves, L., Ross, M., & Thornthwaite, W. (1998). The data warehouse lifecycle toolkit. New York: John Wiley & Sons.

Lee, D., & Chu, W. (2000). Constraints-preserving transformation from XML document type definition to relational schema. In Proceedings of the 19th Int. Conf. on Conceptual Modeling (ER), Salt Lake City, USA (LNCS 1920). Berlin, Germany: Springer.

List, B., Bruckner, R. M., Machaczek, K., & Schiefer, J. (2002). A comparison of data warehouse development methodologies: Case study of the process warehouse. In Proceedings of the International Conference on Database and Expert Systems Applications (DEXA).

Mazón, J.-N., & Trujillo, J. (2008). An MDA approach for the development of data warehouses. Decision Support Systems, 45(1), 41–58. doi:10.1016/j.dss.2006.12.003

Moody, D., & Kortink, M. (2000). From enterprise models to dimensional models: A methodology for data warehouse and data mart design. In Proceedings of DMDW'00, Sweden.

Ouaret, Z., Bellatreche, L., & Boussaid, O. (2007). XUML star: Conception d'un entrepôt de données XML. In Proceedings of the Atelier des Systèmes d'Information Décisionnels, Sousse, Tunisia (pp. 19-20).
Paim, F. R. S., & Castro, J. B. (2003). DWARF: An approach for requirements definition and management of data warehouse systems. In Proceedings of the Int. Conf. on Requirements Engineering, Monterey Bay, CA.

Phipps, C., & Davis, K. (2002). Automating data warehouse conceptual schema design and evaluation. In Proceedings of the 4th Int. Workshop on Design and Management of Data Warehouses (Vol. 58, pp. 23-32).

Prakash, N., & Gosain, A. (2003). Requirements driven data warehouse development. In Proceedings of the 15th Conference on Advanced Information Systems Engineering, Short Paper Proc., Velden, Austria.

Prat, N., Akoka, J., & Comyn-Wattiau, I. (2006). A UML-based data warehouse design method. Decision Support Systems, 42, 1449–1473. doi:10.1016/j.dss.2005.12.001

Rusu, L. I., Rahayu, W., & Taniar, D. (2004). On data cleaning in building XML data warehouses. In Proceedings of the 6th Intl. Conference on Information Integration and Web-based Applications & Services (iiWAS2004), Jakarta, Indonesia (pp. 797-807).

Rusu, L. I., Rahayu, W., & Taniar, D. (2005). A methodology for building XML data warehouses. International Journal of Data Warehousing and Mining, 1(2), 67–92.

Sahuguet, A. (2000). Everything you ever wanted to know about DTDs, but were afraid to ask. In Proceedings of the International Workshop on the Web and Databases (WebDB 2000) (pp. 171-183).

Salem, A., Ghozzi, F., & Ben-Abdallah, H. (2008). Multi-dimensional modeling: Formal specification and verification of the hierarchy concept. In Proceedings of ICEIS 2008 (Vol. 1, pp. 317–322).

Schmidt, A. R., Kersten, M. L., Windhouwer, M. A., & Waas, F. (2000). Efficient relational storage and retrieval of XML documents. In Proceedings of the Workshop on the Web and Databases (WebDB), Dallas, USA.

Schneider, M. (2003). Well-formed data warehouse structures. In Proceedings of the 5th International Workshop at VLDB'03 on Design and Management of Data Warehouses (DMDW'2003), Berlin, Germany.

Seba, D. (2003). Merise - concepts et mise en œuvre. France: Eni.
Shanmugasundaram, J., Tufte, K., He, G., Zhang, C., DeWitt, D., & Naughton, J. (1999). Relational databases for querying XML documents: Limitations and opportunities. In Proceedings of the 25th VLDB Conference, Scotland.

Sheth, A. P., & Larson, J. A. (1990). Federated database systems for managing distributed, heterogeneous, and autonomous databases. ACM Computing Surveys, 22(3), 183–236. doi:10.1145/96602.96604

Vrdoljak, B., Banek, M., & Rizzi, S. (2003). Designing Web warehouses from XML schemas. In Proceedings of the 5th International Conference on Data Warehousing and Knowledge Discovery (DaWaK), Prague, Czech Republic.

Widom, J. (1999). Data management for XML: Research directions. IEEE Data Engineering Bulletin, Special Issue on XML, 22(3).

Wikipedia. (2008). Database. Retrieved August 1, 2008, from http://en.wikipedia.org/wiki/Database

World Wide Web Consortium. (2008). XML Schema: W3C candidate recommendation. Retrieved August 1, 2008, from http://www.w3.org/XML/Schema.html
Yan, M. H., & Ada, W. C. F. (2001). From XML to relational database. In Proceedings of the CEUR Workshop.

Zhang, L., & Yang, X. (2008). An approach to semantic annotation for metadata in relational databases. In Proceedings of the International Symposiums on Information Processing (ISIP) (pp. 635-639).
KEY TERMS AND DEFINITIONS

Data Mart: Data marts are analytical data stores designed to focus on specific business functions for a specific community within an organization. A data mart is designed according to a specific model, namely the multidimensional model, which highlights the axes of data analyses. Data marts are often derived from subsets of the data in a data warehouse, though in the bottom-up data warehouse design methodology the data warehouse is created from the union of data marts.

Document Type Definition (DTD): A DTD defines the tags and attributes that can be used in an XML document. It indicates which tags can appear within other tags. XML documents are described using a subset of DTD which imposes a number of restrictions on the document's structure, as required by the XML standard.

DTD Simplification: The simplification of a DTD removes empty elements, replaces each reference to an ENTITY type with the text corresponding to that entity and then removes the corresponding ENTITY declaration, and applies Flattening, Reduction, and Grouping transformations.

eXtensible Markup Language (XML): XML is a general-purpose specification for creating custom markup languages. It is classified as an extensible language because it allows the user to define the mark-up elements. The main purpose of XML is to aid information systems in sharing data, especially via the Internet.

Model Integration: Model integration produces a single model that combines two or more input models. The produced model can be represented in the same definitional formalism as the input models (or in one of the definitional formalisms used by the heterogeneous input models). The expression of the new model must be formally correct within the definitional formalism used.

Relational Data Model: The relational data model was introduced by E. F. Codd in 1970. It is particularly well suited for modeling business data. In this model, data are organized in tables. The set of names of the columns is called the "schema" of the table. The relational model is the model in most common use today.

Star Schema: The star schema (sometimes referenced as star join schema) is the simplest model of multidimensional schema. The star schema consists of a few facts (possibly just one) referencing any number of dimensions.
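Since the DTD Simplification entry describes a concrete rewriting procedure, a small sketch may help make it tangible. The following Python fragment is a hedged illustration only: it handles just parameter-entity substitution, removal of the corresponding ENTITY declarations, and a naive form of the Flattening transformation on a toy DTD. The regular expressions and the sample DTD are assumptions for illustration, not the transformation used by the chapter's approach.

```python
import re

def simplify_dtd(dtd: str) -> str:
    """Toy DTD simplification: substitute parameter-entity references,
    drop the ENTITY declarations, and flatten nested sequence groups."""
    # Collect parameter-entity declarations of the form <!ENTITY % name "text">.
    entities = dict(re.findall(r'<!ENTITY\s+%\s+(\w+)\s+"([^"]*)"\s*>', dtd))
    # Remove the ENTITY declarations themselves.
    dtd = re.sub(r'<!ENTITY\s+%\s+\w+\s+"[^"]*"\s*>\s*', '', dtd)
    # Replace each reference %name; with the entity's replacement text.
    for name, text in entities.items():
        dtd = dtd.replace('%' + name + ';', text)
    # Naive Flattening: (a, (b, c)) -> (a, b, c) for plain sequence groups.
    # (The Reduction and Grouping transformations are omitted here.)
    pattern = re.compile(r'\(([^()|*+?]*)\)(?=\s*[,)])')
    while pattern.search(dtd):
        dtd = pattern.sub(r'\1', dtd)
    return dtd

toy = '''<!ENTITY % addr "street, city">
<!ELEMENT customer (name, (%addr;))>'''
print(simplify_dtd(toy))
# -> <!ELEMENT customer (name, street, city)>
```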
This work was previously published in Data Warehousing Design and Advanced Engineering Applications: Methods for Complex Construction, edited by Ladjel Bellatreche, pp. 55-80, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 2.8
Migrating Legacy Information Systems to Web Services Architecture

Shing-Han Li, Tatung University, Taiwan
Shi-Ming Huang, National Chung Cheng University, Taiwan
David C. Yen, Miami University, USA
Cheng-Chun Chang, National Chung Cheng University, Taiwan
DOI: 10.4018/978-1-60566-172-8.ch015

ABSTRACT

The lifecycle of an information system (IS) has become relatively shorter than in earlier days as a result of the information technology (IT) revolution and its advancement. It is tremendously difficult for an old architecture to catch up with the dynamic changes occurring in the market. To match the fast pace of these challenges, enterprises have to use the technology/concept of information system reengineering (ISR) to preserve the value of their legacy systems. Consequently, Web services-based systems with a Service-Oriented Architecture (SOA) are widely accepted as one of the possible solutions for an enterprise information system to retain/keep its old legacy systems. Using this aforementioned architecture, enterprise information systems tend to be more flexible and agile to fit into the capricious business environment, and thus easier to integrate with additional applications. In other words, it is indeed an essential requirement for an enterprise to establish such a system to further improve the corporation's productivity and operational efficiency. Specifically, the requirement is simply to migrate the legacy systems to an SOA architecture. However, deciding whether this alternative is feasible involves a trade-off between the value of the legacy systems and their compatibility with SOA. The purpose of this manuscript is to propose a migrating solution to convert the architecture of the legacy system into SOA with a systematic approach. This proposed methodology is different from the traditional object-oriented approaches, as it migrates the
system to have a services-oriented focus without incorporating general object-oriented (OO) or functional-oriented features. In this study, a case study and information capacity theory were employed to verify/validate that this approach is indeed an effective and practicable one.
INTRODUCTION

Due to the dynamic advancement of information technology (IT), the life cycle of the information system (IS) has been greatly reduced. Generally speaking, traditional legacy information systems possess such undesirable characteristics as latency of information, poor reach, inflexibility, and higher cost of maintenance. Furthermore, traditional system architectures such as centralized and client/server are frequently incompatible with the requirements and specifications of today's business environment. To be more specific, legacy information systems have these aforementioned shortcomings, which have prevented businesses and/or organizations from reacting/responding dynamically to rapid challenges as they should. Consequently, enterprises have a strong need to utilize the technology of information system reengineering (ISR) to preserve the value of their legacy systems. In this situation, enterprises or software companies are always in a dilemma of redeveloping/redesigning their legacy systems to include the newer Web services components (Bouguettaya, Malik, Rezgui, & Korff, 2006; Chen, Zhou, & Zhang, 2006; Kim, Sengupta, Fox, & Dalkilic, 2007). Discarding and redeveloping the existing systems not only wastes the money allocated for software investments, but also causes organizations to lose the competitive advantage needed to meet numerous unanticipated contingencies and/or uncertainties. Based on a prior study (Ommering, 2005), system migration will be one of the best ways to reengineer a legacy system. Traditionally, there
are two approaches available to migrate the legacy system to the Web services architecture (Vanston, 2005). The first approach is the legacy externalization approach. This approach is usually the main alternative available on the current market. It generally uses strategic or pointed forms, along with new types of interface display, to develop the integrated products (such as "Web scraping"). The other approach is the component encapsulation approach. This is another viable alternative, which utilizes component standard technologies like the Common Object Request Broker Architecture (CORBA) (OMG, 1995; Vinoski, 1997), the Component Object Model (COM) (Microsoft, 2007), or Enterprise JavaBeans (EJB) (Sun, 2007) to encapsulate the legacy system into components, and then translates them into a Web services standard. Ultimately, this second approach migrates to a component-based and transaction-oriented framework (such as IBM WebSphere and BEA WebLogic) (Liu, Fekete, & Gorton, 2005; Waguespack & Schiano, 2004). Neither of the aforementioned approaches is a bad way for the legacy system to migrate to the equivalent Web services standards. However, they normally utilize a hard-coding technique to implement the interface with the corresponding standard (Brereton & Budgen, 2000; Kwan & Li, 1999; McArthur, Saiedian, & Zand, 2002). Being a traditional structured program, the system normally has a shorter life cycle and lacks scalability, feasibility, and reusability. Further, it would be much more difficult to maintain in the future. On the other hand, if a company applies the component encapsulation approach without incorporating appropriate component migrating methods, the system still has these aforementioned shortcomings (Rahayu, Chang, Dillon, & Taniar, 2000). Unfortunately, most alternatives adopted now by enterprises and/or businesses do not use a proper component migrating method.

Many related studies (Erickson, Lyytinen, & Siau, 2005; Fong, Karlapalem, Li, & Kwan, 1999; Gall, Klosch, & Mittermeir, 1995; Kwan & Li, 1999; Sang, Follen, Kim, & Lopez, 2002) have presented methods that can be utilized to systematically reengineer the legacy system into an object-oriented (OO) or distributed system. However, the Web services architecture by nature is different from a general distributed system. The core concept of Web services is the Service-Oriented Architecture (SOA) (Huang, Hung, Yen, Li, & Wu, 2006; Stal, 2002). In the SOA environment, resources in a network are made available as independent services that can be accessed without any knowledge of the underlying platform implementation (Erl, 2005). Web services can rely on a Web-services composition language (WSCL) such as the Business Process Execution Language for Web Services (BPEL4WS) (IBM, BEA Systems, Microsoft, SAP AG, & Siebel Systems, 2002) to transform an existing Web service into a new type of Web service by employing well-defined process modeling constructs (Chen, Hsu, & Mehta, 2003; Curbera, Khalef, Mukhi, Tai, & Weerawarana, 2003). Enterprise architects believe that SOA can help businesses respond more quickly and cost-effectively to fast-changing market conditions (Sutor, 2006). This style of architecture, in fact, promotes reusability at the macro level (service) rather than the micro level (objects). By doing so, it can greatly simplify interconnection to, and usage of, existing IT (legacy) assets (Carroll & Calvo, 2004).

The purpose of this study is to propose a methodology which utilizes the existing data design of an information system to migrate the legacy system to an SOA system. The benefits include the following items: First, this approach has the advantage of reengineering the legacy system into various system components from a technical aspect, and uses the Web services composition language (WSCL) to translate the existing model into the new Web services architecture. Unlike traditional object-oriented approaches, this proposed methodology migrates the old systems to services-oriented ones rather than object-oriented or functional-oriented ones. Additionally, this approach can be employed to develop a system which will be more flexible and adaptable to fit better to the constantly-changing business environment. Furthermore, this proposed approach will no doubt make the conversion process easier to integrate with other additional applications.

The remainder of this article is organized as follows: The second section provides a brief overview of some legacy systems' reengineering and Web services approaches. The proposed legacy system's migrating approach and the implementation of a prototype will be discussed in the third section. The fourth section contains the system implementation using simulation and a real case study. A comparison of the proposed approach with others is provided in the fifth section. Finally, the sixth section concludes this manuscript.
LITERATURE REVIEW

Information Flow for Business Process (IFBP)

Business Process Management (BPM) is one of the basic elements of the Web services architecture (Basu & Kumar, 2002). It can be decomposed into two major phases: process design and diagnosis. The process diagnosis phase discovers an entire picture of the business process for an enterprise, also known as the AS-IS model. The exploration of the AS-IS model is a very time-consuming and experience-oriented task. As a result, an enterprise has to spend a lot of time and pay huge labor costs during the process diagnosis phase. Besides, it is difficult for process designers to transform one process model to another equivalent one. There are certain gaps among different process-designing methods. Thus, the study of Shi-Ming Huang and Fu-Yun Yu (2004) investigates a novel methodology for business process discovery based on information flow, called IFBP. This IFBP methodology includes the following three phases: transformation, integration, and conversion. The input of IFBP is the dataflow diagrams (DFD) of the existing information systems. The output is an Event-Driven Process Chain (EPC) diagram for the enterprise process flow.
Information System Reengineering

There are several direct and indirect system migrating approaches.
Direct Migration

Sang et al. (2002) presented an approach to integrate legacy applications written in FORTRAN into a distributed object-based framework. FORTRAN codes are modified as little as possible when being decomposed into modules and wrapped as objects. They used object-oriented techniques such as C++ objects to encapsulate individual engine components as well as CORBA, and implemented a wrapper generator, which takes the FORTRAN applications as input and generates the C++ wrapper files and the interface definition language file. Serrano, Montes, and Carver (1999) presented a semiautomatic, evolutionary migration methodology that produces an object-based distributed system from legacy systems. They first used ISA (Identification of Subsystems based on Associations), a design recovery and subsystem classification technique, to produce a data-cohesive hierarchical subsystem decomposition of the subject system. Second, they adapted the subsystems to the object-oriented paradigm. Third, they wrapped up and defined interfaces of the subsystems in order to define components. Finally, middleware technologies for distributed systems were used to implement the communication between components.
Indirect Migration

Gall et al. (1995) proposed an approach which re-architects old procedural software into an object-oriented architecture. The transformation process was developed to identify potential objects in the procedural source code in order to enable the utilization of object-oriented concepts for future, related software maintenance activities. This approach was not performed directly at the source-code level; instead, different representations were developed out of the procedural program on higher levels of abstraction (e.g., structure charts, data flow diagrams, entity-relationship diagrams, and application models), which represent different kinds of program information. Additional application-domain knowledge was introduced by human experts to support the program transformation and to enable several higher-level decisions during the development process. Kwan and Li (1999) proposed a methodology to reengineer previously-developed relational database systems into OO database systems. Their approach is based on the input of: (1) the Extended Entity Relationship (EER) model, which provides rules for structuring data; (2) the Data Dictionary, which provides static data semantics; and (3) the DFD, which provides dynamic data semantics. This approach captures the existing database design, uses knowledge in OO modeling, and then represents them by means of production rules to guide the pattern extraction algorithm that is applied to perform the data mining process to identify the "data dependency of a process to an object". The existing Data Dictionary, DFD, and EER model are all useful and, hence, are needed for capturing the existing database design to recover the hidden dynamic semantics. Huang et al. (2006) proposed a methodology that focused on how to migrate legacy systems with a well-structured OO analysis to ensure the quality of a reengineered component-based system. This research adopted the well-structured object-oriented analysis to improve the quality
of reengineered systems. The result of the reengineered system will be a Web-enabled system. The proposed migration approach discusses how to process a well-structured object-oriented analysis. The research considered the following four factors to ensure the quality of a reengineered component-based system: (1) multi-value attributes, (2) inheritance relationships, (3) functional dependency, and (4) object behavior. Further, their study considered the migration from three aspects (data, process, and user interface) to make the applied reengineering process more complete. The comparison of all the aforementioned reengineering approaches is shown in Table 1 below. In this comparison table, it is noted that the approach proposed by Huang et al. (2006) could present a more semantic legacy system with a higher quality of components.

Table 1. Comparison of the reengineering approaches

| Approach / Indicator | Sang (2002) | Serrano (1999) | Gall (1995) | Kwan (1999) | Huang (2006) |
|---|---|---|---|---|---|
| Proposed time | 2002 | 1999 | 1995 | 1997 | 2002 |
| Approach | Direct migration | Direct migration | Indirect migration | Indirect migration | Indirect migration |
| How to find objects | Codes | Codes | ERD | ERD | ERD |
| How to find methods | Codes | Codes | Codes | DFD | DFD |
| Is component-based | Yes | Yes | No | No | Yes |
| User interface consideration | No | No | No | No | Yes |
| Aggregation consideration | No | No | No | No | Yes |
| Inheritance consideration | No | No | No | No | Yes |
| Object behavior consideration | Yes | Yes | Yes | Yes | Yes |
| Multi-value attributes consideration | No | No | No | No | Yes |
| Quality of object-oriented model | - | - | - | - | - |

Business Component and Service Component Definition

Business components typically emulate a specific business entity, such as a customer, an order, or an account. When sets of coherent components are linked together, they can inter-operate to accomplish an even higher level of business function (Herzum & Sims, 1998; Jin, Urban, & Dietrich, 2006; Lee, Pipino, Strong, & Wang, 2004; Vitharana & Jain, 2000; Zhao & Siau, 2007). The scope of a component discussed in this article will include the class and its related interfaces together as a component. By doing so, it can preserve the simple execution function of a component from getting over-complicated. Services are self-describing, open components that can support rapid, low-cost composition of distributed applications (Papazoglou & Georgakopoulos, 2003). Service-oriented components attempt to fulfill users' requirements. Consequently, the service providers' responsibility is mainly to design/develop the most adaptable service processes and components for different users. The users just enjoy the content and quality of the provided service, but do not care about who the true service provider is or how to acquire the service. Some definitions of service argue that services can be implemented by components (Sprott, 2002). However, in a complex environment, a service can actually include several components (Perrey
& Lycett, 2003). Therefore, this article considers that a service is comprised of a number of system components which can simultaneously interact and integrate with each other. Enterprises have a tendency to keep their distance from compositions of too many small and trivial system components, and hence have strong reservations about their operation and management while applying the services. For this reason, this research defines a service to be composed of system components from the perspective of users' requirements. A service-oriented feature should be an extension of the component-oriented one, and should be aimed at satisfying the user's situation.
MIGRATING METHODOLOGY

By nature, Web services are different from legacy systems in terms of architecture and the degree of coupling. To migrate the system from a tightly-coupled architecture to a loosely-coupled one, the best strategy is to apply OO reengineering. It is noted that a complete Web services architecture generally utilizes a Web services composition language (WSCL) to construct the service components. Most available WSCLs are designed based on business processes. From the business perspective, most approaches available today merely apply the business process to represent the legacy system. To this end, this article perceives the business process as one of the main elements of Web services. From the above discussion, this research analyzes the legacy system from two aspects: technical and business. This approach, as visualized in Figure 1, is summarized in five subsequent steps.

Traditionally, system designers use tools and techniques such as the ERD, DFD, and user interfaces to describe the entire architecture and then design/develop the information system. These aforementioned models have been applied to analyze the business process of the legacy system and have then been reengineered into the corresponding system components. This research reengineers the legacy system into various system components from the technical aspect, analyzes the model of the legacy system to attain the business processes from the business aspect, integrates the system components and the business processes together, and then analyzes the services based on the processes. Finally, according to this proposed integrated model, it translates the model to the Web services composition language to build a Web services architecture. The following sections describe each of the five steps of this proposed approach.

Figure 1. Migration methodology

Step 1: Reengineer System Components

Figure 2. Object-oriented migration methodology (Source: Huang, Hung, Yen, Li & Wu, 2006)

To migrate the system to a loosely-coupled architecture, the OO reengineering approach could be utilized. In the comparison table (Table 1) discussed earlier, the Huang et al. (2006) approach could be utilized to represent more semantics of the legacy system with higher-quality components. The Huang et al. (2006) approach, as visualized in Figure 2, is summarized in five steps. This study extends Huang's methodology to reengineer system components. The first activity is to employ the model diagrams of a legacy system, such as the DFD, ERD, and system interface, as the different inputs. There are two extended rules associated with this step; they can be discussed as follows:
• Extended rule 1: In step 1 of Huang's approach, the facts represented by the data stores and external entities of a DFD can be translated to classes and attributes. In some cases, the DFD itself may also imply a comprehensive system. For this reason, the DFD can consequently be translated to a system class.
• Extended rule 2: In step 3 of Huang's approach, this article performs a further analysis between the functions and external entities (or data stores). When a dataflow is connected between two functions, it actually indicates the directional flow of the arrow from one function to another. It is very similar to the control flow used in EPC.
In this step, the authors regard every dataflow as a method, especially the dataflow between one function and another. Since the aforementioned data flow implies that the IS controls the flow of data in the system, there should be a corresponding method of a system class. An illustrative example is provided in Figure 3.
Figure 3. The DFD is translated into several classes
After the completion of this step, we can get the output—component specifications.
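As a concrete illustration of step 1, the sketch below applies the two extended rules to a toy DFD expressed as plain Python data: data stores and external entities become classes, the DFD itself becomes a system class, and each dataflow yields a method. All names and data structures here are illustrative assumptions; they are not taken from the chapter's prototype.

```python
# A toy DFD: functions, data stores, external entities, and labeled dataflows.
dfd = {
    "name": "ARS",                               # the DFD as a whole (rule 1)
    "stores": ["Receivable Data", "Sale Data"],
    "entities": ["Customer"],
    "functions": ["Receivable", "Write-off"],
    "flows": [                                   # (source, target, label)
        ("Customer", "Receivable", "Customer"),
        ("Receivable Data", "Write-off", "Receivable Data"),
        ("Receivable", "Write-off", "Receivable Data"),
    ],
}

def reengineer_components(dfd):
    """Step 1 sketch: derive class specifications (classes plus methods) from a DFD."""
    # Data stores and external entities translate to classes; the DFD itself
    # also implies a comprehensive system, hence a system class (rule 1).
    classes = {name: [] for name in dfd["stores"] + dfd["entities"]}
    classes[dfd["name"]] = []
    functions = set(dfd["functions"])
    for source, target, label in dfd["flows"]:
        if source in functions and target in functions:
            # A dataflow between two functions is controlled by the IS itself,
            # so it becomes a method of the system class (rule 2).
            classes[dfd["name"]].append(label)
        elif source in classes:
            classes[source].append(label)        # store/entity owns the flow
        else:
            classes[target].append(label)
    return classes

for cls, methods in reengineer_components(dfd).items():
    print(cls, "->", methods)
```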
Step 2: Analyze the Business Process

Due to the tight integration with the system components in step 1, this study analyzes the business process from a system perspective. This step applies and extends the "Information Flow for Business Process" (IFBP) (Huang & Yu, 2004) method to analyze the business process. Table 2 illustrates the idea of this conversion methodology for IFBP. Again, the first activity is to use the DFD of the legacy system as an input. There are three extended rules in this step; they are discussed below and are also shown in Figure 4:

• Extended rule 1: The dataflow between one function and another should be translated to the event element and the associated control flow of an EPC. Further, the function can be connected with various events by using the control flow.
• Extended rule 2: The dataflow connected from a function to a data store or an external entity should be translated to an event and a control flow. The control flow is used to connect the function and the event, and the information flow is employed to connect the function and the external entity.
• Extended rule 3: This rule is similar to extended rule 2. The dataflow connected from a data store or an external entity to a function should be translated to an event and a control flow. It differs from extended rule 2 in the direction of the control flow and the information flow.

Figure 4. The extended rules for IFBP

By applying these three rules, we can then get the output: the business process (EPC).

Table 2. The conversion methodology for IFBP

| Steps | Description |
|---|---|
| Step 1 | DFD (Function) maps to EPC (Function) |
| Step 2 | DFD (Data Flow) maps to EPC (Event) |
| Step 3 | Use EPC (Control Flow) to connect EPC (Function) and EPC (Event) |
| Step 4 | Use logical symbols (AND, OR, XOR) to combine more than one EPC (Event) with an EPC (Function) |
| Step 5 | DFD (External Entity) maps to EPC (Information Object) |
| Step 6 | DFD (Data Store) maps to EPC (File) |
| Step 7 | DFD (Data Flow connected with a Data Store) maps to EPC (Information Flow) |
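The following sketch shows one way the Table 2 mappings and the three extended rules could be coded. The element names, the event-naming convention, and the tuple shapes are assumptions for illustration, not the chapter's actual implementation.

```python
def dfd_to_epc(functions, entities, stores, flows):
    """Step 2 sketch: map DFD elements to EPC elements (Table 2, extended rules 1-3)."""
    epc = {
        "functions": sorted(functions),      # Table 2, step 1
        "events": [],                        # step 2: one event per dataflow
        "control_flows": [],                 # step 3
        "info_objects": sorted(entities),    # step 5
        "files": sorted(stores),             # step 6
        "info_flows": [],                    # step 7
    }
    for source, target, label in flows:
        event = label + " available"         # assumed naming convention
        epc["events"].append(event)
        if source in functions and target in functions:
            # Extended rule 1: function-to-function dataflow.
            epc["control_flows"] += [(source, event), (event, target)]
        elif source in functions:
            # Extended rule 2: function -> data store / external entity.
            epc["control_flows"].append((source, event))
            epc["info_flows"].append((source, target))
        else:
            # Extended rule 3: data store / external entity -> function,
            # with the direction of the flows reversed.
            epc["control_flows"].append((event, target))
            epc["info_flows"].append((source, target))
    return epc

epc = dfd_to_epc(
    functions={"Receivable", "Write-off"},
    entities={"Customer"},
    stores={"Receivable Data"},
    flows=[("Customer", "Receivable", "Loan"),
           ("Receivable", "Write-off", "Receivable Data")],
)
print(epc["events"])
print(epc["control_flows"])
```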
Step 3: Integrate System Components and Business Process

In this step, this study integrates the system components and the business process. The first activity is to use the component specifications and the EPC as inputs. In this step, the authors integrate the processes from their original source, the DFD. The dataflow in the DFD is translated to a method of a class in step 1 and to an event in step 2, so that the two can be integrated later based on the same source. Afterwards, these aforementioned dataflows can be shown in an EPC model, annotated as the methods of classes. Figure 5 demonstrates the integration of the system components and business processes. Finally, in this step, we can get the output: the integrated EPC model.
Figure 5. Integrating the system components and business processes
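Because the same dataflow becomes a class method in step 1 and an EPC event in step 2, the integration of step 3 can key on the shared dataflow label. The sketch below is a hedged illustration of that idea; the annotation format is an assumption.

```python
def integrate(component_specs, epc_events):
    """Step 3 sketch: attach class methods to EPC events derived from the
    same dataflow.

    component_specs: {class_name: [dataflow labels turned into methods]}
    epc_events:      [dataflow labels turned into events]
    """
    integrated_epc = []
    for event in epc_events:
        # Find every class whose method was derived from this dataflow label.
        owners = [cls for cls, methods in component_specs.items()
                  if event in methods]
        integrated_epc.append(
            {"event": event,
             "methods": [cls + "." + event.replace(" ", "_") for cls in owners]})
    return integrated_epc

specs = {"Receivable Data": ["Receivable Data", "Write-off"],
         "Customer": ["Loan"]}
for entry in integrate(specs, ["Loan", "Receivable Data"]):
    print(entry)
```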
Step 4: Analyze Services

This study considers that a service is composed of system components from the perspective of the users who are employing the service. First of all, using the integrated EPC model from step 3 as input, this research analyzes the system services from the users' perspective. Enterprises use services to handle the interaction between the external entities and the other elements in the EPC model. For simplicity, this study analyzes all possible interactive situations between two entities in the EPC model, and then classifies them into the seven possible situations introduced as follows:

• Situation 1: There is no entity in the process. The analyzer should use the lower-level DFD until it locates the interaction between the entities. The analyzer can translate the whole process into a service.
• Situation 2: The external entity only inputs information into one function. The analyzer can translate the whole process into a service.
• Situation 3: Only one function inputs information to the external entity. Again, the analyzer can translate the whole process into a service.
• Situation 4: The external entity inputs information into the process, and then the process outputs information to another external entity. As presented in Figure 6, Situation 4, the whole process provides the service for A and B, so the analyzer can translate the whole process into a service.
• Situation 5: The external entity outputs the process, and then the process inputs information into another external entity. As presented in Figure 6, Situation 5, A and B are two separate services that do interact with each other.
• Situation 6: Two external entities input different functions. As presented in Figure 6, Situation 6, A and B are two separate services that do not interact with each other.
• Situation 7: Two external entities input different functions. As presented in Figure 6, Situation 7, A and B are two separate services that do not interact with each other.

Finally, we can get the outputs: the service component specification and the services-oriented EPC model.
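A rough sketch of the situation analysis is given below. It covers only situations 1 through 4, since distinguishing situations 5 through 7 requires the full flow topology of the EPC model; the decision rules shown are simplified assumptions.

```python
def classify_situation(entity_edges):
    """Step 4 sketch: classify entity/process interaction (situations 1-4 only).

    entity_edges: list of (direction, entity) pairs, where direction is
    "in" for entity -> process and "out" for process -> entity.
    """
    inputs = [e for d, e in entity_edges if d == "in"]
    outputs = [e for d, e in entity_edges if d == "out"]
    if not entity_edges:
        return "Situation 1: no entity; descend to a lower-level DFD, then one service"
    if inputs and not outputs:
        return "Situation 2: whole process becomes one service"
    if outputs and not inputs:
        return "Situation 3: whole process becomes one service"
    return "Situation 4: input and output entities; whole process is one service"

print(classify_situation([("in", "Customer"), ("out", "Storehouse")]))
```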
Figure 6. Services analysis

After translating the related processes to service components, the service components can be represented by EPC events to repaint the EPC model, as shown in Figure 7.

Figure 7. Translation of services to process

Step 5: Translate to Web Services Standard

After the EPC model has been built, the service components and the services-oriented process need to be translated into a suitable Web services standard. This study translates them into the Business Process Execution Language for Web Services (BPEL4WS), which is one of the Web services composition languages. The user applies the translated Web services components and the BPEL4WS process model to build the Web Service Architecture-based system. In this step, the research does not evaluate which component technology should be used to encapsulate the component, but translates the EPC model to BPEL4WS by using the approach of Huang and Yu (2004). Since a service component includes many associated system components, it is required to encapsulate the system component, process, and interface into a new component.
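To give the flavor of this final translation, the fragment below emits a minimal BPEL4WS skeleton with one invoke activity per service component placed in a sequence. The partner-link naming and the operation name are illustrative assumptions, and a real translation following Huang and Yu (2004) would also generate the WSDL and data-handling activities.

```python
def epc_to_bpel(process_name, services):
    """Step 5 sketch: render a services-oriented process as a BPEL4WS skeleton."""
    ns = "http://schemas.xmlsoap.org/ws/2003/03/business-process/"  # BPEL4WS 1.1
    lines = ['<process name="%s" xmlns="%s">' % (process_name, ns),
             "  <sequence>"]
    for service in services:
        pl = service.replace(" ", "") + "PL"   # assumed partner-link naming
        lines.append('    <invoke partnerLink="%s" operation="execute"/>' % pl)
    lines += ["  </sequence>", "</process>"]
    return "\n".join(lines)

print(epc_to_bpel("ARS", ["Receivable Service", "Write-off Service"]))
```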
CASE STUDY

As shown in Figure 8, the prototype system includes three layers (i.e., an interaction layer, a translation layer, and a repository layer). Figure 9 shows the system snapshot of the prototype. The users input the legacy system information (DFD, ERD, user interface) into the prototype system, and then the components translator analyzes this information and stores the components in a metadata database. The users can get the information flow via the IFBP translator and build the service components as well as the services-oriented process by using the WS analyzer.

Figure 8. System architecture

Figure 9. The system snapshot of the prototype

To validate this prototype, this article presents a case study of the reengineering of an Accounts Receivable System (ARS). By employing experts' help, the ERD of the database and the DFD are described in Figure 10. According to Step 1, reengineering system components, the seven reengineered components and eleven reengineered methods are shown in Table 3. This case study named the various methods of the components from the original dataflows of the DFD.

Figure 10. The DFD of the accounts receivable system

In Step 2, the business process of this case study is analyzed from the DFD model of the legacy system, the Accounts Receivable System. The output is the EPC model, as shown in Figure 11. In Step 3, the business process and system components are integrated, and the methods of the system components are employed onto the business process components, as shown in Figure 12. In Step 4, the services for the integrated model of this case study were analyzed, and the services-oriented architecture was consequently built, as shown in Figure 13. The prototype implementation combined the seven components and eleven methods into two services, as shown in Figure 14. In Step 5, the service components and services-oriented processes need to be translated into a suitable Web services standard. The users can then edit some information about the service components and translate it into the Business Process Execution Language for Web Services (BPEL4WS). Figure 15 demonstrates the system snapshot of this translation and the result.
Table 3. Components and methods of ARS

| Methods | Sources | Objectives | Type | Components (class) |
|---|---|---|---|---|
| Receivable Receipt | Receivable Data | Account Detail | Insert, Update, Delete | Account Detail |
| Account Data | Sale Month Balance | Receivable Data | Insert, Update, Delete | Receivable Data |
| Receivable Data | Receivable Data | Write-off | Select | Receivable Data |
| Write-off | Write-off | Receivable Data | Insert, Update, Delete | Receivable Data |
| Sale Data | Sale Data | Sale Month Balance | Select | Sale Data |
| Write-off | Write-off | Account Age List | Insert, Update, Delete | Account Age List |
| Make Check Sheet | Sale Month Balance | Export Sheet | Function to Function | ARS |
| Receivable Data | Receivable | Write-off | Function to Function | ARS |
| Export Sheet | Storehouse | Receivable | External Entity to Function | Storehouse |
| Loan | Customer | Write-off | External Entity to Function | Customer |
| Customer | Customer | Receivable | External Entity to Function | Customer |
Figure 11. The EPC of accounts receivable system
Figure 12. Integrated model
Figure 13. Service-oriented model
Figure 14. Service-oriented model
Figure 15. System snapshot of translation to Web services standard
In this case study, the legacy accounts receivable system (ARS) is a stand-alone system. In order to provide more flexibility, improve the corporation's productivity, and enhance the capability to fit into the capricious business environment, the Web services architecture is chosen. By using this proposed migration methodology, the original seven components and eleven methods are, in fact, combined into two services. The proposed approach uses component-based technology and builds the system based on the business process. For this reason, it is more suitable for business process management. This study analyzes the service components architecture based on user interaction in the business process, so it will be more closely matched with the users' requirements.
CONCLUSION

The advancement of information technology is rapid. It is hard for an old architecture to keep up with the changes in the current market. Enterprises need to use the technology of information system reengineering (ISR) to preserve the value of their existing legacy systems. Currently, a Web services-based system with a Service-Oriented Architecture (SOA) is widely adopted as a solution to reengineer enterprise ISs. Using this architecture, the system will be more flexible and adaptable to fit the dramatically-changing business environment, and hence will be easier to integrate with additional applications. Obviously, an enterprise will need to have great synergy to establish such a system. The purpose of this research is to propose a migrating solution to translate the architecture of the legacy system to SOA with a systematic approach. This methodology is different from traditional object-oriented approaches (see Table 4), in that it migrates the system to be services-oriented without applying general object-oriented (OO) or functional-oriented features. In this study, the system architecture is implemented according to the prototyping development discussed above. The contribution of this research can be summarized as follows. First of all, this study uses a systematic approach to explore the service provided by a legacy system. Specifically, this research uses a systematic approach to extract business flows from the DFD diagram of a legacy system, and analyzes the service provided by the system from a service perspective. It is indeed a reasonable fit for today's Web service environment.
Table 4. Comparison between the proposed methodology and other similar products and methodologies

| Product/methodology | Principle | Support business process |
|---|---|---|
| Cape Connect (http://www.capeclear.com/products/) | Utilizes screen-scraping technology; takes the legacy system interface definitions as input and generates the appropriate WSDL files | No |
| Software AG EntireX (http://www.softwareag.com/Corporate/products/entirex/default.asp) | Employs wrapping technology to translate the legacy system to Web services | No |
| IBM CICS Transaction Gateway (http://www-306.ibm.com/software/htp/cics/ctg/) | Delivers J2EE standards-based access to CICS applications | No |
| EXTES Xuras (http://www.beacon-it.co.jp/products/pro_serv/eai/xuras/index.shtml) | Utilizes wrapping technology to translate the legacy system into Web services | No |
| GoXML Transform Server (http://www.goxml.com/features.php) | Integrates business processes by mapping complex data formats such as EDI, SWIFT, COBOL, CSV, flat text, XML, and XBRL | Yes (only focus on functions reference) |
| Xbridge Host Data Connect (http://www.xbridgesystems.com/) | Enhances the Web-services capabilities of the OS/390 mainframe data access product | No |
| IONA Orbix (http://www.iona.com/products/orbix/) | Translates programs to CORBA objects and then converts them to Web services | No |
| BALES (Heuvel, Hillegersberg, & Papazoglou, 2002) | Uses a forward engineering technique to define the business domain and a reverse engineering technique to understand the function of a legacy system; uses the Component Definition Language (CDL) to specify both legacy and business objects in order to facilitate a search for matching objects and parameters of Web services with legacy objects | No |
| This study | Uses a systematic approach to explore the service provided by a legacy system, integrates system components and business processes, and generates the appropriate WSDL files | Yes (using ERD, DFD, user interface to understand the business process) |

Second, this study has tightly coupled the analyzed services and the rebuilt components, which will
be easily extracted from the legacy system. Consequently, this proposed approach will create a more compact system, which may increase operational efficiency. Third, the proposed methodology presented in this study can help locate the shared semantics between system components and business processes, and uses a systematic method to integrate the system components and business processes tightly by carefully analyzing the process. Furthermore, this method can be employed to build a methodology to translate the traditional structured analysis to a corresponding service-oriented framework. This study analyzed the DFD, ERD, and system user interface to develop business processes and system components. In addition, this study analyzed services to construct a systematic approach which can be utilized to translate the model diagrams of a legacy system into a services-oriented EPC model. The specifications of the Web service components can be easily built in this case, which in turn can help users to construct Web service components more efficiently. In addition, the IS reengineering technique can be utilized to build the prototype based on the proposed Web services framework. To this end, this study built a prototype to validate this proposed methodology. This prototype can guide users to reengineer the legacy system into a Web service framework. As a result, this research can be a valuable reference for other future studies. This study adopted a rule-based translation methodology, which can be used not only to improve the quality of the design/development, but also to enhance the correctness of the original data. Some ambiguous or vague wordings presented in the original DFD can definitely influence the quality of a translated process. It is obvious that every phase should be properly revised, so that the output would better fit the actual practice performed in the businesses and/or industries. In practice, some organizations can either have incomplete DFDs or even have no DFDs to perform the system/service design. This study cannot be applied to such organizations. Fortunately, techniques such as IS reverse engineering have
gained considerable progress in translating source code into information flows such as DFDs (Benedusi, Cimitile, & De Carlini, 1989; Lakhotia, 1995) or translating a database into an ERD diagram (Computer Associates, Inc., 2006; SyBase, Inc., 2007). This study placed its focus on the conversion methodology, and paid limited attention to the techniques used to develop system components, such as EJB, CORBA, or COM+. Practically, different composition techniques may have different ways to implement the Web services. To this end, this study lacks any discussion of the different implementation alternatives to design/develop the Web services. The proposed methodology is tailored to IS' assistance to various business operations. It works better in situations with complicated processes and external transactions. As a result, it does not have sufficient capability in dealing with a simple process or some applications without processes. Furthermore, other system programming and firmware that require specific hardware cannot be easily applied with this study. Future work needs to be done towards a fully automatic reengineering process to eliminate unexpected human factors and/or possible errors. Further study should also be conducted to determine the right component implementation technique. In addition, it is also required to enhance the participation of domain knowledge experts. In the future, additional benefits may include the development of new applications by composing the software components found in the reengineering process in order to accelerate the development time.
ACKNOWLEDGMENT

The National Science Council, Taiwan, under Grant No. NSC95-2524-S-194-004-, has supported the work presented in this article. We greatly appreciate its financial support and encouragement.
REFERENCES

Basu, A., & Kumar, A. (2002). Research commentary: Workflow management issues in e-business. Information Systems Research, 13(1), 1–14. doi:10.1287/isre.13.1.1.94

Benedusi, P., Cimitile, A., & De Carlini, U. (1989). A reverse engineering methodology to reconstruct hierarchical data flow diagrams for software maintenance. In Proceedings of the IEEE International Conference on Software Maintenance (pp. 180-189).

Bouguettaya, A., Malik, Z., Rezgui, A., & Korff, L. (2006). A scalable middleware for Web databases. Journal of Database Management, 17(4), 20–46.

Brereton, P., & Budgen, D. (2000). Component-based systems: A classification of issues. IEEE Computer, 33(11), 54–62.

Carroll, N. L., & Calvo, R. A. (2004, July 5). Querying data from distributed heterogeneous database systems through Web services. In Proceedings of the Tenth Australian World Wide Web Conference (AUSWEB 04).

Chen, Q., Hsu, M., & Mehta, V. (2003). How public conversation management integrated with local business process management. In Proceedings of the IEEE International Conference on E-Commerce (CEC 2003) (pp. 199–206). doi:10.1109/COEC.2003.1210250

Chen, Y., Zhou, L., & Zhang, D. (2006). Ontology-supported Web service composition: An approach to service-oriented knowledge management in corporate services. Journal of Database Management, 17(1), 67–84.

Computer Associates, Inc. (2006, October 6). AllFusion ERwin Data Modeler.

Curbera, F., Khalef, R., Mukhi, N., Tai, S., & Weerawarana, S. (2003, October). The next step in Web services. Communications of the ACM, 46(10), 29–34. doi:10.1145/944217.944234

Erickson, J., Lyytinen, K., & Siau, K. (2005). Agile modeling, agile software development, and extreme programming: The state of research. Journal of Database Management, 16(4), 88–100.

Erl, T. (2005). Service-oriented architecture: Concepts, technology, and design. Upper Saddle River, NJ: Prentice Hall.

Fong, J., Karlapalem, K., Li, Q., & Kwan, I. (1999). Methodology of schema integration for new database applications: A practitioner's approach. Journal of Database Management, 10(1), 3–18.

Gall, H., Klosch, R., & Mittermeir, R. (1995). Object-oriented re-architecturing. In Proceedings of the 5th European Software Engineering Conference (ESEC '95).

Herzum, P., & Sims, O. (1998). The business component approach. In Proceedings of the OOPSLA'98 Business Object Workshop IV.

Heuvel, W. V. D., Hillegersberg, J. V., & Papazoglou, M. (2002). A methodology to support Web-services development using legacy systems. In Proceedings of the IFIP TC8/WG8.1 Working Conference on Engineering Information Systems in the Internet Context (IFIP Conference Proceedings, Vol. 231, pp. 81-103).

Huang, S. M., Hung, S. Y., Yen, D., Li, S. H., & Wu, C. J. (2006). Enterprise application system reengineering: A business component approach. Journal of Database Management, 17(3), 66–91.

Huang, S. M., & Yu, F. Y. (2004). IFBP: A methodology for business process discovery based on information flow. Journal of International Management, 11(3), 55–78.

IBM, BEA Systems, Microsoft, SAP AG, & Siebel Systems. (2002, July 30). Business Process Execution Language for Web Services, Version 1.1.
Jin, Y., Urban, S. D., & Dietrich, S. W. (2006). Extending the OBJECTIVE benchmark for evaluation of active rules in a distributed component integration environment. Journal of Database Management, 17(4), 47–69.

Kim, H. M., Sengupta, A., Fox, M. S., & Dalkilic, M. (2007). A measurement ontology generalizable for emerging domain applications on the semantic Web. Journal of Database Management, 18(1), 20–42.

Kwan, I., & Li, Q. (1999). A hybrid approach to convert relational schema to object-oriented schema. International Journal of Information Science, 117, 201–241.

Lakhotia, A. (1995, February). Wolf: A tool to recover dataflow oriented design from source code. In Proceedings of the Fifth Annual Workshop on Systems Reengineering Technology.

Lee, Y. W., Pipino, L., Strong, D. M., & Wang, R. Y. (2004). Process-embedded data integrity. Journal of Database Management, 15(1), 87–103.

Liu, Y., Fekete, A., & Gorton, I. (2005). Design-level performance prediction of component-based applications. IEEE Transactions on Software Engineering, 31(11), 928–934. doi:10.1109/TSE.2005.127

McArthur, K., Saiedian, H., & Zand, M. (2002). An evaluation of the impact of component-based architectures on software reusability. Information and Software Technology, 44(6), 351–359. doi:10.1016/S0950-5849(02)00020-4

Microsoft. (2007, April 16). COM: Component Object Model technologies.

OMG. (1995). Common Object Request Broker Architecture.

Ommering, R. V. (2005). Software reuse in product populations. IEEE Transactions on Software Engineering, 31(7), 537–544. doi:10.1109/TSE.2005.84
Papazoglou, M. P., & Georgakopoulos, D. (2003). Service-oriented computing. Communications of the ACM, 46, 25–28. doi:10.1145/944217.944233

Perrey, R., & Lycett, M. (2003, January). Service-oriented architecture. In Proceedings of the 2003 Symposium on Applications and the Internet Workshops (pp. 27-31).

Rahayu, J. W., Chang, E., Dillon, T. S., & Taniar, D. (2000). A methodology for transforming inheritance relationships in an object-oriented conceptual model to relational tables. Information and Software Technology, 42(8), 571–592. doi:10.1016/S0950-5849(00)00103-8

Sang, J., Follen, G., Kim, C., & Lopez, I. (2002). Development of CORBA-based engineering applications from legacy Fortran programs. Information and Software Technology, 44(3), 175–184. doi:10.1016/S0950-5849(02)00005-8

Serrano, M. A., Montes, D. O., & Carver, D. L. (1999). Evolutionary migration of legacy systems to an object-based distributed environment. In Proceedings of the IEEE International Conference on Software Maintenance (ICSM'99) (pp. 86-95).

Sprott, D. (2002). Service-oriented process matters. CBDi Forum Newsletter.

Stal, M. (2002). Web services: Beyond component-based computing. Communications of the ACM, 45(10), 71–77. doi:10.1145/570907.570934

Sun. (2007, April 16). Java EE: Enterprise JavaBeans technology.

Sutor, B. (2006, May 21). Open standards vs. open source: How to think about software, standards, and service-oriented architecture at the beginning of the 21st century.

SyBase, Inc. (2007, April 16). Sybase PowerDesigner: Redefining enterprise modeling.
Vanston, M. (2005, August 21). Integrating legacy systems with Web services. The Meta Group Inc.

Vinoski, S. (1997, February). CORBA: Integrating diverse applications within distributed heterogeneous environments. IEEE Communications Magazine, 35(2), 46–55. doi:10.1109/35.565655

Vitharana, P., & Jain, H. (2000). Research issues in testing business components. Information & Management, 37(6), 297–309. doi:10.1016/S0378-7206(99)00056-7

Waguespack, L., & Schiano, W. T. (2004). Component-based IS architecture. Information Systems Management, 21(3), 53–60. doi:10.1201/1078/44432.21.3.20040601/82477.8

Zhao, L., & Siau, K. (2007). Information mediation using metamodels: An approach using XML and common warehouse metamodel. Journal of Database Management, 18(3), 69–82.
This work was previously published in Advanced Principles for Improving Database Design, Systems Modeling, and Software Development, edited by Keng Siau and John Erickson, pp. 282-306, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 2.9
EIS for Consumers Classification and Support Decision Making in a Power Utility Database

Juan Ignacio Guerrero Alonso, University of Seville, Spain
Carlos León de Mora, University of Seville, Spain
Félix Biscarri Triviño, University of Seville, Spain
Iñigo Monedero Goicoechea, University of Seville, Spain
Jesús Biscarri Triviño, University of Seville, Spain
Rocío Millán, University of Seville, Spain
Abstract

Increases in storage system capacity and reductions in access time have enabled new technologies that support the automatic treatment of very large databases. In this chapter, a methodology is proposed for creating Enterprise Information Systems capable of using all the information available about customers. As an example of the methodology's use, an Enterprise Information System for the classification of customer problems is proposed. This EIS integrates several technologies. Data Warehousing and Data Mining are two technologies that can automatically analyse corporate databases; the present work proposes their integration, together with a rule-based expert system, to classify utility consumption using the information stored in corporate databases.

DOI: 10.4018/978-1-61520-625-4.ch008
Introduction

Enterprise Information Systems (EIS) are applications that provide high-quality services through the treatment of large volumes of information. Frequently, these processes include artificial intelligence methods or other knowledge-discovery technologies. An EIS can integrate any technology that helps in information treatment, thereby becoming an integrated system composed of several modules that work together to solve a particular problem. The great number of methodologies and technologies that have appeared for EIS development has allowed EIS to proliferate in many markets. This situation has led to the diversification of EIS, depending on the goal each system pursues and on how it approaches that goal. In this chapter, an EIS that integrates knowledge to help human experts in decision making, called a Decision Support System (DSS), is proposed. This kind of system is very useful for utility distribution companies. Such companies share several characteristics. For example, the consumption of water, power, or gas is hard to control. The company installs metering equipment to register client consumption and, in some cases, adds control equipment to avoid overloads. Normally, this equipment is the property of the utility company, and manipulating it without the company's authorization is illegal. To illustrate the proposed generic DSS methodology, an example of its application to a power utility is shown. This example DSS aims to help in the classification of non-technical losses. Utilities mainly face two classes of incidents:
• Technical losses. These losses are produced in the distribution stage. In power distribution companies, they correspond to energy losses caused by:
  ◦ Wire warming (Joule effect).
  ◦ Distribution facility defects.
  ◦ Natural causes.
• Non-technical losses. These incidents represent faults and/or manipulations of the installation that cause the total or partial absence, or the modification, of the consumption recorded on the company side. If the company cannot correctly meter the consumption, it cannot invoice the utility and, therefore, an economic loss is produced.
Nowadays, companies have predictive systems for technical losses that operate with a very low error rate, because they are normally based on physical and climatic calculations. In contrast, non-technical losses are very difficult to detect and control. The most common non-technical losses are:

• Anomalies. These are characterized by breakdowns, by mistakes made by the company's installation technicians, or by deterioration of the client's facilities.
• Frauds. These are improper manipulations carried out by clients on their installations, with the objective of modifying, for their own profit, the energy registered on the meter.
In most of the references (see the 'Overview on Fraud Detection' section), this type of detection is performed by treating the client's consumption together with a few other characteristics, such as the economic sector and the geographic location. Nevertheless, corporate databases contain much more information, including:

• Client information.
• Contract information.
• Technical specifications of the client's facilities.
• Results and comments recorded by the company's inspectors and technicians.

Depending on the company, even more information may exist.
The joint treatment of all this information enables the detection and classification of non-technical losses. This treatment demands the use of diverse technologies to adapt the procedures to the type of information being used. At the same time, it is necessary to design integration methods to construct the Information System. This chapter presents a general review of the Information System, covering each of the following points:

• The objective of the present chapter.
• The way in which companies carry out the activities related to controlling client consumption.
• The methodology adopted for the Information System's development.
• The information management methods that companies use.
• The Information System's basic architecture, together with the integration constraints that are established, followed by descriptions of all the modules.
• The Information System's verification and validation methods.
• A presentation of the experimental results obtained.
Objectives

The EIS's main objective is to carry out an exhaustive client analysis using the most complete information available. The EIS proposes a classification of the analysed clients according to the incidents found; in this way, it supports the inspector as a Decision Support System, using the acquired knowledge to produce a classification based on the results obtained. To reach this goal, the EIS must take advantage of all the available client information and apply the inspectors' knowledge to it, searching for any anomaly or fraud that could mask a problem in the client's installation (a non-technical loss).
Most of the studies carried out so far use only information related to client consumption and economic activity. The present chapter argues for the need to use all the available information about the client, because the commercial systems of utility distribution companies hold much more information than consumption data alone. Because of this need, it is first necessary to study the available information and to use the inspectors' knowledge to determine which information is relevant. Once this is decided, it is necessary to determine which technology is most suitable for extracting the knowledge required to analyse each type of information. The knowledge of the company's inspectors and experts ranges from the structure and content of the company's commercial system to the procedures and devices of the different existing facilities. The information available in utility distribution companies shares similar characteristics, so the same techniques can be applied to its treatment.
Overview on Fraud Detection

Normally, the existing references limit themselves to the treatment of consumption information. There are three precedents directly related to the topic presented in this chapter:

• F. Biscarri et al. (Biscarri, 2008) proposed different artificial intelligence techniques and statistical methods for non-technical loss detection. These techniques apply various methods that allow the detection of anomalous consumption patterns.
• J.R. Galván et al. (Galván, 1998) proposed a technology based on radial basis neural networks, using only the consumption evolution in monthly periods. It also uses the economic sector indirectly, since the test only uses agricultural clients.
• José E. Cabral (Cabral, 2004; Cabral, 2006) proposed the application of rough sets to classify categorized attribute values. This is the only proposal found that is capable of taking advantage of a large amount of database information; however, the system works with discretized information, so it is necessary to design discretization processes for information that possesses continuous values.
Another technology often used for fraud detection is forecasting with artificial neural networks (ANNs) of different architectures, combined with other methods such as data mining (Wheeler, 2000), fuzzy logic (Lau, 2008), and time series (Azadeh, 2007). This approach is widely represented nowadays, since it is used in energy as well as in finance (Hand, 2001; Kou, 2004) and telecommunications (Daskalaki, 2003). The load forecasting literature is much more extensive, since it includes short-term (Hobbs, 1998), medium-term (Gavrilas, 2001), and long-term (Padmakumari, 1999) forecasting methods, but it deals with sets of clients rather than with detecting fraud in a specific client. Other articles raise the possibility of adding climatic parameters to increase forecasting accuracy (Shakouri, 2007).
Utility Distribution Companies

All utility distribution companies have metering equipment installed and hold information about client consumption. In the specific case of power utilities, there are three types of metering, depending on the installed tension (low, medium, or high). Normally, low tension client consumption is the most difficult to control. Power utilities have millions of clients with low tension contracts. To control the consumption of all these clients there are mainly two methods:

• Telemeasuring. This method requires MODEMs placed in the client's installation that communicate the measurements telematically to the distribution company.
• Manual measuring. This method requires that a company employee visit each and every meter to take the reading manually.
The telemeasuring equipment provides more exhaustive control of the measurements. Nevertheless, manual readings are normally taken monthly or bi-monthly, which is what usually happens with low tension clients. This complicates the analysis of client consumption, because it reduces the available consumption information, and it is on this kind of data that the present work has been tested. The activities that the inspectors and technical staff of distribution companies carry out are defined by the companies by means of procedures. These procedures define the actions that the inspector or technician must take according to the goal pursued. When inspectors or technicians visit the client's facilities, they must perform the actions specified in a procedure and, having finished, must register the procedure and its results in the company's commercial system. Normally, the inspectors or technicians can also add a comment or observation. This information provides a great advantage over methods and technologies that use only client consumption information, since it contains very important details that would not normally be available in the numeric information of the company's databases. When an inspector finds a fraud or an anomaly in the client's installation, it must be communicated to the company, which creates a case file that stores all the information related to the fraud or anomaly from its detection until it is corrected. These case files are very important for research, because they can be used to determine the efficiency of the automatic methods.
Figure 1. Methodology diagram
All these characteristics are common to existing utility distribution companies that have a large number of clients.
Business Management Information

All utility distribution companies store great quantities of information about clients. Normally, these companies need a large infrastructure to support and manage all this information. At the same time, the infrastructure allows company personnel to add, modify, or delete information in the company database. These processes are the most common and numerous of those performed on the company database. Companies cannot interrupt them, because doing so might cause an economic loss. Nevertheless, a series of processes that place a heavier load on the database is also necessary; these are mainly executed during the night or at weekends, when client administrative activities are not required. Another solution that companies choose is the creation of Datamarts. These are small images of the original database, adapted to the needs of the activity in which they are going to be used. Datamarts do not allow work with online information; instead, they are updated over short predefined periods, or even incrementally. Normally, the update processes are also executed during periods of inactivity. In addition, massive periodic reviews must be performed on sets of clients who satisfy certain conditions, so it is necessary to run batch processes on nightly schedules that initiate the necessary procedures. This is the main source of case files for the companies, since these are normally created by inspectors searching for a certain pattern.
Methodology

The methodology used for this kind of EIS must consider the existence of two types of processes: research processes and development processes. The methodology could be used to implement a generic EIS for analysing any kind of information. In this chapter, it is used to build a system that treats the information available in utility distribution companies in order to classify losses.
Figure 1 shows the evolutionary methodology proposed. Each of its phases is described briefly below:

• Identification. In this phase, familiarization with the vocabulary and the aspects related to the problem domain takes place.
• Knowledge Extraction. This is a research phase. It consists of two stages, which can be carried out in parallel:
  ◦ Knowledge Acquisition. The researchers obtain the knowledge necessary to perform the consumer analysis through knowledge acquisition meetings with the experts. The design of the test set and of the validation criteria for the EIS is also addressed here.
  ◦ Information Review. The researchers familiarize themselves with the information available in the databases, which determines the information usable for the solution.
• Information Preprocessing. In this stage, the information available in the database is preprocessed in order to facilitate its adaptation and to allow the design and structuring of the necessary databases. In addition, the preprocessing techniques that can be used to solve the problem are designed.
• Knowledge Assimilation. This has two research stages:
  ◦ Knowledge Formalization. The design and structure of the knowledge base are established.
  ◦ Information Modelling. Modelling algorithms are applied to the preprocessed information.
• Implementation. The development and coding of the knowledge base and of the modelling processes.
• Validation, Verification and Integration. The validation of the knowledge acquired in the extraction phase is done by means of expert validation. The verification tests are designed in the knowledge extraction stage, establishing the evaluation criteria with the company. The integration tests allow the processes of the different technologies to be fitted together in agreement with the established evaluation criteria; they are carried out at the same time as the verification processes, since the unification of the different technologies is necessary for the correct functioning of the EIS.
• Implantation and Maintenance. It is necessary to create implantation and maintenance plans that allow the company to assimilate the new EIS.
This methodology has been created from the existing generic methodologies of each of the most common technologies, trying to combine the different development and research activities. In projects of this type, in which competitive companies are involved, it is necessary to establish additional processes that allow the research to be controlled, establishing criteria that determine when a development step is necessary and when a research phase has been satisfactorily completed. The example shown in this chapter presents the application of this methodology: an EIS for the classification of losses in power utilities is built. The system could be applied to other utilities.
Integration

In general, the integration of the different modules requires solving several questions:

• The output format of each module to be integrated must satisfy a set of conditions that allow its use within a solid integration. To solve this problem there are two options:
  ◦ Designing intermediate translation processes. This is one of the most used options, since it supports a modular design. The disadvantage of such modules is that, in applications that use artificial intelligence, they force the intermediate translation module to be modified whenever a new module is integrated. Another disadvantage of this solution is the increase in execution time.
  ◦ Designing modules that produce their results directly in the required format. The main disadvantage of this option is that it forces the design of specific modules.
• Selecting how the Information System's execution will be directed. Traditionally, two options existed: centralized or distributed. With the use of artificial intelligence technologies, modules can be designed that use the knowledge extracted by other modules to obtain new knowledge or to help in decision making. This suggests a distributed control system in which every module operates independently but with an implicit relation among the modules, because it is the set of all modules that enables the decision-making process.
• Determining the way in which the Information System will communicate with the user. In this sense, it is necessary to answer a series of questions:
  ◦ How will the information be presented to the user?
  ◦ What information will be presented to the user?
  ◦ At what levels can the user interact?
• Determining how the different modules will be synchronized:
  ◦ Synchronous modules. The different modules must be coordinated according to their execution times. In artificial intelligence problems, this solution is very complex, because if heuristic search processes (typical in artificial intelligence) are used, it is very difficult to determine the execution time.
  ◦ Asynchronous modules. In this case, it is necessary to establish minimal operating conditions for each module, so that decisions can be made without requiring all modules to have finished. Logically, the results will be better if the modules have completed their jobs.
  ◦ Sequential modules. An execution order is established among the different modules. This is the most common approach used in artificial intelligence processes.
These questions impose a series of constraints that force certain decisions in the design and development of each module. For some aspects of the proposed example, combined solutions have been chosen; for instance, intermediate processes (middleware) are used only on certain occasions, because normally the use of fixed modules is more practical. For interaction with the user, the system proposes reports with graphs that present the information related to the client together with a possible conclusion. This conclusion is expressed by classifying the client into one of several categories that identify the problem related to that client. With this information, the expert can determine whether he or she agrees with the classification, adjusting the system if necessary. For execution, the sequential mode was chosen initially, but thanks to the structures adopted and the technologies used, small modifications would make the system capable of working asynchronously and of supporting incremental loads.
Figure 2. Generic architecture
In this kind of project, it is necessary to use the best application for each technology. In some cases, however, it will be necessary to use a single application that integrates all the technologies. Such an application is oriented to the treatment of large volumes of information. Normally, this condition is associated with a project that is limited to one or more applications.
Basic Architecture

The Information System's basic architecture is based mainly on:

• The type of information to be treated. The nature of the information determines which type of technology is most suitable for analysing it, since the same methods cannot be used directly for numerical and alphanumeric information.
• The way the information is presented to the user. The activities that the employee carries out determine the most suitable form of presenting the information.
• The company's infrastructure. The Information System must be sized correctly to allow its implantation within the company's infrastructure.
These conditions are the same for all utility distribution companies, because they use very similar information and have comparable client volumes. To determine the influence of these three factors, it is necessary to carry out a knowledge extraction phase, selecting the pieces of information that can contribute to the classification and detection of non-technical losses. In these databases there is information of different natures: numerical, date, and alphanumeric. For each of them, it is necessary to investigate the most suitable procedure or technology for its extraction and treatment. In this respect, the typical information in utility distribution companies comprises:
• Client and contract data.
• Facility information.
• Consumption information.
• Inspector documentation and comments.
Once the necessary information is identified, the procedures for its treatment and analysis can be designed. The different modules that compose this architecture are shown in Figure 2. The process begins with the different sources of information; in some cases, preprocessing methods must be applied to them because of problems in how the information was obtained. The Information System needs a user interface to show the results obtained; this module is essential, because a DSS-type Enterprise Information System needs some form of interaction with the user. In the same way, an administrator must be able to modify or update the system on exceptional occasions. This interface gives access to each of the modules that form part of the EIS. Each of these modules (1, 2, 3, …) performs a treatment of the information that adds new data to the knowledge database. The goal of each module is to help in the fitting phase and in the adaptation to different problems, minimizing the interaction required from the administrator. The resulting information is treated by the DSS module in the decision process, according to the stored information.
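To make this module-and-knowledge-database arrangement concrete, the following is a minimal sketch, not the authors' implementation: independent modules enrich a shared per-client knowledge store, and a DSS module decides from it. The module names, the dictionary-based store, and the decision rule are all illustrative assumptions.

```python
# Sketch of the Figure 2 idea: modules write facts about a client into a shared
# knowledge store; the DSS module reads the accumulated facts to decide.
# All names and the toy decision logic are illustrative assumptions.

from typing import Callable, Dict, List

KnowledgeDB = Dict[str, dict]   # per-client knowledge accumulated by the modules

def warehouse_module(client_id: str, kdb: KnowledgeDB) -> None:
    # stands in for data quality/coherence checks
    kdb.setdefault(client_id, {})["clean_data"] = True

def mining_module(client_id: str, kdb: KnowledgeDB) -> None:
    # stands in for the consumption-pattern study
    kdb[client_id]["consumption_normal"] = False

def dss_module(client_id: str, kdb: KnowledgeDB) -> str:
    facts = kdb[client_id]
    return "inspect" if facts["clean_data"] and not facts["consumption_normal"] else "ok"

# Sequential execution order, as initially adopted by the EIS described above.
modules: List[Callable[[str, KnowledgeDB], None]] = [warehouse_module, mining_module]

kdb: KnowledgeDB = {}
for module in modules:
    module("client-42", kdb)
print(dss_module("client-42", kdb))   # -> "inspect"
```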
Information Preprocessing

The System has to use the necessary information to perform the analysis. The different development and research phases are intended to determine what information is used in the analysis and what transformations must be made so that the Information System can use it. Studies performed directly on very large information sets are not practical, due to the long processing time needed to obtain results. In addition, the companies need many resources for the normal job of information maintenance and new client updates. All these reasons force a reduction of the study sample, restrict the extraction to inactivity periods (night schedule), and require reducing the execution time of the different studies. The prior extraction of information is used to create offline databases with all the necessary information; although these are not continuously updated, they accelerate the research and free it from depending on the commercial databases, since they work locally.

Proposed Architecture

The proposed architecture for analysing the clients of a utility distribution company has the following modules:
• Data Warehousing for the treatment of the great volume of information available in the databases of power utility companies. These techniques improve the state and quality of the information, verifying its coherence and integrity and detecting several kinds of errors (format errors, incorrect values, etc.).
• Data Mining for carrying out the power consumption studies; the trends and ranges of consumption are established by means of statistical techniques. The studies of consumption ranges are carried out through a statistical study that searches for normal behaviour patterns, taking into consideration a series of criteria that allow one type of consumption to be distinguished from another.
• Text Mining for analysing the documentation of the inspections made at clients' electrical installations. Initially, this module was built from experience, using concept extraction processes on the documentation and inspection comments about client facilities. These concepts are organized into several categories that identify different events in the client's facilities.
• Auxiliary tools that follow up on clients who present certain characteristics.

Figure 3. Integration method
The Rule-Based Expert System is used to coordinate the results of the different modules and to take the decision about the client's final classification. The expert system uses rule sets classified into 7 different groups according to their function. The rules have an IF-THEN-ELSE structure. These 7 rule groups implement the integration between the modules and the decision system (in this case, the expert system). As shown in Figure 3, the modules extract knowledge that generates the antecedents of the rules dynamically. This EIS has 135 static rules, but a client analysis may apply around 500 rules once the rules with dynamic antecedents are added.
Rule-Based Expert System

The Information System is structured mainly around a rule-based expert system. Each rule is used to analyse a selected aspect of the client; accordingly, there are seven groups of rules that try to check every aspect of the available information: consumption, contract, installation, etc. To access the information quickly and to allow the analysis of large volumes of clients, Data Warehousing techniques are used. The system follows the Ralph Kimball (Kimball, 2002) approach to create a fact table organized around the subject of analysis; from it, the contracts of the clients to be analysed are extracted. Working around this table, the expert system applies the rules that allow the clients to be clustered according to the problem each client presents.
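As a minimal sketch of the rule structure described above (not the authors' implementation), the following shows how IF-THEN rules in functional groups, including rules whose antecedents are generated dynamically from another module's output, might be represented and applied to a client record. All field names, thresholds, and labels are hypothetical.

```python
# Minimal sketch of a rule-based classifier with static and dynamic rules.
# Field names, thresholds, and consequent labels are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    group: str                          # one of the 7 functional groups, e.g. "consumption"
    antecedent: Callable[[Dict], bool]  # IF part, evaluated on a client record
    consequent: str                     # THEN part: evidence added to the classification

def classify(client: Dict, rules: List[Rule]) -> List[str]:
    """Apply every rule whose antecedent holds; collect the fired consequents."""
    return [r.consequent for r in rules if r.antecedent(client)]

# Static rule: hand-written by knowledge engineers.
static_rules = [
    Rule("contract",
         lambda c: c["contracted_power_kw"] < c["peak_demand_kw"],
         "possible-anomaly: demand exceeds contracted power"),
]

# Dynamic rule: its antecedent is generated from knowledge extracted by another
# module (here, a normal-consumption range produced by the data mining module).
def make_range_rule(low: float, high: float) -> Rule:
    return Rule("consumption",
                lambda c: not (low <= c["avg_consumption_kwh"] <= high),
                "possible-fraud: consumption outside the normal pattern of its group")

rules = static_rules + [make_range_rule(low=120.0, high=480.0)]

client = {"contracted_power_kw": 10, "peak_demand_kw": 14, "avg_consumption_kwh": 95.0}
print(classify(client, rules))   # both rules fire for this client
```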
Data Mining

The Data Mining module is used as an additional classification system, complementing the previous studies, which mainly detect anomalous consumption patterns by means of statistical techniques, neural networks, and artificial intelligence. The anomalous consumption patterns vary widely in range. Because of this, it is necessary to establish additional criteria to allow the analysis of large quantities of information. The Data Mining process in this EIS is limited to the implementation of a series of statistical techniques for combined rates and trends of normal consumption. This module is built around the idea of studying the normal consumption patterns of customers. This operation cannot be performed directly. The method applied in this EIS begins with a pre-filter that tries to remove clients with allegedly anomalous consumption. After the removal, the statistical study of the resulting sample of customers begins. It is necessary to perform this study on the largest possible number of customers, since a larger sample is more statistically representative. There is a problem with applying this method to all clients indiscriminately. For example, a customer who runs a pub on the coast, with a contracted power exceeding 15 kW, does not consume like a warehouse, or like another pub located inland. This raises the need for some kind of division that provides normal consumption patterns for each combination of the desired characteristics. During the various investigations and tests, it was concluded that there are a number of key information fields for determining acceptable standard consumption patterns: geographical location, economic activity, billing frequency, time discrimination, and contracted power. In addition, because of the different patterns that may be found, it is necessary to establish a temporal division of the consumption information: total annual, seasonal, and monthly. These groups provide divisions within which normal patterns, conditioned by the client's characteristics, are defined. Comparing a customer's consumption with that of the relevant groups sharing the same features provides an idea of the client's normal consumption. As can be inferred, dividing a sample can lead to groups that have no statistical significance. Because of this, it is necessary to carry out the following processes which, while not solving the problem completely, do allow the information to be used in most of the groups:

1. Discretize the values of the contracted powers, so that within each group there is greater representation.
2. Perform the study on the largest possible number of customers, which also increases the statistical representation in each group, although it requires higher processing capacity.
3. Form different groupings, so that some groups are defined by only 4 of the specified characteristics while others use 5 or 6 of them. This way, if the most restrictive group lacks statistical significance, the client can be compared with a less restrictive group to obtain a rough estimate of normal consumption.
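The grouping-and-range idea above can be sketched as follows. This is a minimal illustration assuming a pandas DataFrame: the column names, the power discretization, and the median-band test are assumptions standing in for the chapter's unpublished statistical procedure.

```python
# Sketch of the grouping-and-range idea: derive a normal consumption range per
# customer group and flag clients outside it. Column names, the power
# discretization, and the median band are illustrative assumptions.

import pandas as pd

df = pd.DataFrame({
    "client_id":  [1, 2, 3, 4, 5, 6],
    "zone":       ["coast", "coast", "coast", "inland", "inland", "inland"],
    "activity":   ["pub", "pub", "pub", "warehouse", "warehouse", "warehouse"],
    "power_kw":   [15.2, 16.0, 14.8, 40.0, 41.5, 39.0],
    "annual_kwh": [5200, 4900, 900, 21000, 20500, 20800],
})

# Discretize contracted power so small differences do not fragment the groups
# (process 1 in the list above).
df["power_band"] = pd.cut(df["power_kw"], bins=[0, 10, 20, 50],
                          labels=["low", "mid", "high"])

group_keys = ["zone", "activity", "power_band"]

# Normal range per group: a simple band around the group median stands in for
# the statistical criteria; clients far outside their group's band are flagged.
med = df.groupby(group_keys, observed=True)["annual_kwh"].transform("median")
df["suspicious"] = (df["annual_kwh"] < 0.5 * med) | (df["annual_kwh"] > 2.0 * med)

print(df[["client_id", "zone", "activity", "annual_kwh", "suspicious"]])
# client 3 (a coastal pub consuming far below its group's median) is flagged
```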
This study also provides additional information about the behaviour of each of the groups studied, since it can be checked whether a certain group of customers presents seasonal or irregular consumption. Moreover, this study, together with the text mining described in the next section, provides an inherent capability of automatic adaptation to the characteristics of the sample to be analysed, because if as many customers as possible are studied, information will be available on all the cases that can be analysed. Currently, this study must be performed on virtually all the customers existing in the company, and it has provided enough information to analyse different types of customers, except for a few groups where there is not enough information.
Text Mining

The purpose of the Text Mining process is to analyse the content of the comments added by the different inspectors, and of other alphanumeric fields. These provide useful information for analysing the clients' consumption, since they supply additional data about the state of the facility and the inspector's observations.
The techniques used in this process are reduced to the extraction and classification of concepts. A concept is a set of one or more words that represents an idea, event, action, or real object; in this way, a concept can be anything from a single word to a full term. The classification of these concepts has been done according to the experience of the inspectors, with fuzzy techniques used to detect similarities between different spellings. The classification of the terms is based on the problems to be identified. Thus, this information is used to verify the conclusions of the customer analysis. Not all clients have this information; in such cases, the Information System provides whatever additional information exists on the client's consumption and on the different incidents identified. As a result of the implementation of the Text Mining techniques, a dictionary of concepts is obtained and added to the Information System's knowledge base. This EIS process was particularly complex, because the initial classification was made according to the knowledge provided by experts and inspectors, and it involved a large volume of classification and verification work.
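A minimal sketch of concept extraction with tolerance for spelling variants follows. The concept dictionary, the categories, and the similarity threshold are illustrative assumptions; the chapter only states that fuzzy techniques were used, so the standard library's difflib stands in for them here.

```python
# Sketch of concept extraction tolerant to misspellings in inspector comments.
# The dictionary entries and the 0.8 cutoff are illustrative assumptions.

import difflib

# Concept dictionary: surface forms mapped to the category they evidence.
CONCEPTS = {
    "broken seal":    "possible-fraud",
    "meter stopped":  "anomaly",
    "reversed meter": "possible-fraud",
}

def extract_concepts(comment: str, cutoff: float = 0.8):
    """Return (category, concept) pairs whose surface form fuzzily matches the comment."""
    found = []
    words = comment.lower().split()
    for concept, category in CONCEPTS.items():
        n = len(concept.split())
        # slide a window of the concept's length over the comment
        for i in range(len(words) - n + 1):
            window = " ".join(words[i:i + n])
            if difflib.SequenceMatcher(None, window, concept).ratio() >= cutoff:
                found.append((category, concept))
                break
    return found

print(extract_concepts("Visit done, brocken seal found, meter stoped since May"))
# -> [('possible-fraud', 'broken seal'), ('anomaly', 'meter stopped')]
```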
Verification and Validation Methodology

Verification checks whether the Information System performs the analysis without losing information and without making mistakes in the process. This requires reviewing the results of each module separately and of the Information System as a whole. By using the EIS cyclically on pre-designed cases, the system can be fixed so that no information is lost and the calculations are performed correctly. The loss of information is one of the most common problems in systems created by merging several modules: if the modules are not properly designed and constructed, information passed between two modules may be missing when the conversion of information is not successful.

Validation is a procedure for determining whether the Information System does the work for which it was created in a robust and effective way. One must first define the estimated efficiency limits to be achieved, and then build different prototypes, refining the Information System until it equals or exceeds the objectives. The objectives for Information Systems are defined in terms of time cost, robustness, and efficiency. In general, it is preferable to obtain systems with low time cost, high efficiency, and high robustness, looking for a good balance, because a very robust system is usually less efficient and has a high time cost. In terms of efficacy, the validation of this Information System, like that of other systems in the utility distribution domain, is quite complex. There are mainly two methods to validate the different prototypes that are created:

• Performing an analysis of the data from a closed set of customers and then determining, through inspection, whether there is a non-technical loss in each client's facilities. This method is quite common in some references (Cabral, 2004; Cabral, 2006). It presents several problems:
  ◦ The method is very inefficient, because the inspections take time: each client must be visited one by one and the related documentation completed.
  ◦ The number of clients that can be reviewed is quite small, since more clients would take longer to produce results. This decreases the probability of finding a client with a non-technical loss, because it reduces the size of the study.
• Using two samples of the same clients, extracted at different moments in time.
In this way, it is possible to see which clients have a problem, which are correct, and which have not been visited. The procedure is to analyse the information from the oldest sample and compare the results with the new sample. This comparison is a research process, and one must determine whether the non-technical losses detected in the most recent sample match what the analysis concluded. Moreover, there is the possibility of refining the system using the wrongly classified clients (since information is available on clients for whom a non-technical loss was not confirmed). However, this technique presents a problem: it is not possible to establish any conclusion about the remaining clients for whom there is no information in the new sample. For the Enterprise Information System presented here, the second option was chosen, using a closed set of customers at two separate moments in time. This method has allowed the Information System to be refined in order to reach the proposed classification margins. There is an additional problem, which affects both methods: problems in the information itself. For example, in some cases the companies have taken measurements at the client's facilities; in such cases, if the problem persists over time, it may cause a non-technical loss and may, in fact, mask a fraud. Within the complete project, the validation procedure used for the Enterprise Information System is the first method described.
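The two-sample comparison can be sketched minimally as follows. The data structures, labels, and agreement counts are illustrative assumptions; the chapter does not specify its exact comparison metrics.

```python
# Sketch of the two-sample validation idea: classify clients on the old sample,
# then compare against what the newer sample revealed. Labels and data are
# illustrative assumptions.

old_sample_prediction = {101: "fraud", 102: "correct", 103: "anomaly", 104: "fraud"}
new_sample_outcome    = {101: "fraud", 102: "correct", 103: "correct"}  # 104 not revisited

confirmed, refuted, unknown = [], [], []
for client, predicted in old_sample_prediction.items():
    outcome = new_sample_outcome.get(client)
    if outcome is None:
        unknown.append(client)     # no conclusion possible, as noted above
    elif outcome == predicted:
        confirmed.append(client)
    else:
        refuted.append(client)     # candidate for refining the rule base

print(f"confirmed={confirmed} refuted={refuted} no-information={unknown}")
```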
Experimental Results

In a practical application at a particular company, potential clients with non-technical losses are first identified through statistical techniques, neural networks, and artificial intelligence. All the available information about these clients is then retrieved, and they are analysed by the present Enterprise Information System. As a result of this analysis, reports and graphs for each client are obtained. It has been found that this Enterprise Information System provides a complex filtering process that prevents the inspection of clients who do not actually have non-technical losses. In this sense, the Enterprise Information System provides a filter that usually removes between 20% and 70% of the clients who would otherwise represent wasted expenditure, since in most cases they do not have non-technical losses. The reports of the Enterprise Information System give the inspectors and researchers more information about the client and the problems presented, containing just the necessary information and using graphical information on a time basis (consumption, etc.). Moreover, these reports state the reasons why the client has been included in a given category.
Conclusion

This chapter explores research on expert systems in the utility consumption domain, and their combination with other technologies to enable the design and construction of an Enterprise Information System. In particular, the research focuses on a very little exploited area: the automation of the analysis of the available information for the classification of anomalies and fraud in utility companies. In the Enterprise Information System discussed in this chapter, the main source of complexity was the huge amount of knowledge required and the variety of problems that may occur in the client analysis. The main research tasks, which made major contributions, focus on:

• Design of a methodology for classification systems with different types of available information.
• Identification and classification of the necessary knowledge.
• Identification and classification of the analysis cases for utility distribution clients.
• Use of different types of information in the analysis of clients.
• Use of different integration techniques.
• Implementation of the Enterprise Information System.
• Design and construction of an Enterprise Information System that automates its adaptation to the different test samples.
• Provision of Decision Support System (DSS) capability through the analysis of reports and graphs.
References

Azadeh, A., Ghaderi, S. F., & Sohrabkhani, S. (2007). Forecasting electrical consumption by integration of neural network, time series and ANOVA. Applied Mathematics and Computation, 186, 1753–1761. doi:10.1016/j.amc.2006.08.094

Biscarri, F., Monedero, I., León, C., Guerrero, J. I., Biscarri, J., & Millán, R. (2008, June). A data mining method based on the variability of the customer consumption: A special application on electric utility companies. In Proceedings of the Tenth International Conference on Enterprise Information Systems, Volume Artificial Intelligence and Decision Support Systems (AIDSS) (pp. 370-374). Barcelona, Spain.

Cabral, J., Pinto, J., Linares, K., & Pinto, A. (in press). Methodology for fraud detection using rough sets. IEEE International Conference on Granular Computing.

Cabral, J., Pinto, J. O. P., Gontijo, E., & Reis Filho, J. (2004, October). Fraud detection in electrical energy consumers using rough sets. In IEEE International Conference on Systems, Man and Cybernetics (Vol. 4, pp. 3625-3629).

Daskalaki, S., Kopanas, I., Goudara, M., & Avouris, N. (2003). Data mining for decision support on customer insolvency in telecommunications business. European Journal of Operational Research, 145, 239–255. doi:10.1016/S0377-2217(02)00532-5

Galván, J. R., Elices, A., Muñoz, A., Czernichow, T., & Sanz-Bobi, M. A. (1998, November). System for detection of abnormalities and fraud in customer consumption. In 12th Conference on the Electric Power Supply Industry, Pattaya, Thailand.

Gavrilas, M., Ciutea, I., & Tanasa, C. (2001, June). Medium-term load forecasting with artificial neural network models. CIRED 2001, Conference Publication No. 482.

Hand, J. D. (2001). Prospecting for gems in credit card data. IMA Journal of Management Mathematics, 12, 172–200. doi:10.1093/imaman/12.2.173

Hobbs, B. F., Helman, U., Jitprapaikulsarn, S., Konda, S., & Maratukulam, D. (1998). Artificial neural networks for short-term energy forecasting: Accuracy and economic value. Neurocomputing, 23, 71–84. doi:10.1016/S0925-2312(98)00072-1

Kimball, R., & Ross, M. (2002). The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling (2nd ed.). New York: John Wiley & Sons.

Kou, Y., Lu, C., Sinvongwattana, S., & Huang, Y. (2004). Survey of fraud detection techniques. In Proceedings of the 2004 IEEE International Conference on Networking, Sensing & Control (pp. 21-23), Taipei, Taiwan.

Lau, H. C. W., Cheng, E. N. M., Lee, C. K. M., & Ho, G. T. S. (2008). A fuzzy logic approach to forecast energy consumption change in a manufacturing system. Expert Systems with Applications, 34, 1813–1824. doi:10.1016/j.eswa.2007.02.015

Padmakumari, K., Mohandas, K. P., & Thiruvengadam, S. (1999). Long term distribution demand forecasting using neuro fuzzy computations. Electrical Power and Energy Systems, 21, 315–322. doi:10.1016/S0142-0615(98)00056-8

Shakouri, H., Nadimi, R., & Ghaderi, F. (2008). A hybrid TSK-FR model to study short-term variations of electricity demand versus the temperature changes. Expert Systems with Applications. doi:10.1016/j.eswa.2007.12.058

Szkuta, B. R., Sanabria, L. A., & Dillon, T. S. (1999, August). Electricity price short-term forecasting using artificial neural networks. IEEE Transactions on Power Systems, 14(3). doi:10.1109/59.780895

Wheeler, R., & Aitken, S. (2000). Multiple algorithms for fraud detection. Knowledge-Based Systems, 13, 93–99. doi:10.1016/S0950-7051(00)00050-2
Key Terms and Definitions

Client Facilities: Everything from the end of the utility distribution line to the client's home. Normally, these facilities contain measuring and control equipment for the client's consumption.

Company Commercial System: The database holding the company's client information, together with its interface for interaction with the user.

Data Mining: A set of techniques based initially on statistics; current techniques also draw on research related to artificial intelligence. The main objective of data mining is the automatic extraction of patterns from data.

Data Warehouse: A technology that proposes another way of organizing the information in databases, orienting it specifically to the subject related to the problem. This idea introduces information redundancy but accelerates the queries.

Datamart: A small image of a large database, oriented to one specific objective. Normally, a datamart is built with data warehouse technology.

Expert Systems: Programs used to solve specific problems, built from the knowledge of experts implemented in a knowledge base.

Inspector: Normally a person on the company staff; a technician who visits the client's facilities, is capable of manipulating the metering equipment, and has the company's authorization.

Low Tension: Normally, the set of clients supplied at less than 1,000 volts.

MODEM: MOdulator-DEModulator. An electronic device used to transmit information by means of signal modulation.

Rule-Based Expert Systems: Expert systems whose knowledge base is implemented as a set of rules.

Text Mining: A technology born from Data Mining that has recently become a major research field. Its main objective is the automatic extraction of patterns from unstructured information. Text Mining techniques usually use natural language processing (NLP) to categorize unstructured information.
This work was previously published in Enterprise Information Systems and Implementing IT Infrastructures: Challenges and Issues, edited by S. Parthasarathy, pp. 103-118, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 2.10
An ERP Adoption Model for Midsize Businesses

Fahd Alizai
Victoria University, Australia

Stephen Burgess
Victoria University, Australia
Abstract
This chapter theorizes the development of a conceptual ERP adoption model applicable to midsize businesses. The general business factors associated with ERP implementation, along with the corresponding organisational benefits, are identified. The chapter also highlights the constraints that confront midsize businesses when implementing sophisticated applications. The need for ERP adoption can arise from an attempt to be more competitive or from external pressure from large businesses to adopt an ERP application. The proposed conceptual model uses a strategic approach encompassing the ERP implementation processes, stages, factors, and issues associated with ERP adoption in midsize businesses. This research also focuses on the identification of strategies in the organisational, people, and technical domains that could be influential for ERP adoption.
Introduction

The importance of midsize businesses has been recognised in recent decades due to their role in creating jobs, enhancing global economic activity, and, most importantly, their higher growth rate regardless of size (Rovere & Lebre, 1996; Acs, 1990). To increase their production capabilities, they should be vigilant about adopting the latest technology (Barad & Gien, 2001), as the use of Information Technology (IT) can increase innovative activity, resulting in improved productivity and efficiency in business operations (Correa, 1994). It is therefore appropriate for midsize businesses to utilize their resources and adopt means of automated data transfer, both internally and externally (Caillaud, 2001). Business applications such as ERP systems can provide a better way to execute business operations in an effective, organised, and sophisticated manner.
DOI: 10.4018/978-1-60566-892-5.ch009
The adoption of ERP applications in modern organisations has been described as one of the most innovative developments associated with the IT sector (Al-Mashari, 2002). ERP systems can be viewed as sophisticated business applications that integrate the major functions of different departments (Koch, 2003), as their modules allow an organization to improve the functionality of its business processes (Chung, 1999). Hence, ERP software modules have the ability, once implemented, to integrate major activities across the organisational departments using one integrated software solution (Koch, 2003). Amoako-Gyampah (2007) suggests that ERP systems are integrated software developed to handle multiple corporate functions, allowing companies to synchronise activities, eliminate multiple data sources through the provision of accurate and timely information, achieve better communication among different units to meet expectations, and reduce the cost of managing incompatible legacy systems. In effect, this can greatly assist organisations to carry out their operations in more effective and efficient ways and allow the workforce to interact and collaborate in an information-enabled environment. ERP systems have been developed over the last two decades to replace the legacy and Material Requirement Planning (MRP) systems that have traditionally been associated with larger enterprises. As the ERP market has evolved and matured, so have the related hardware and infrastructure technologies. The cost of ERP solutions has been reduced to the point where it has now become viable for the midsize business sector to consider ERP implementation (Aberdeen, 2006). Arguably, the implementation of ERP systems in midsize businesses could be viewed simply from the perspective of applying the success factors already identified for larger businesses to a different set of smaller entities. However, midsize businesses are unlike their larger counterparts. They have a diverse range of separate adoption issues that need to be considered when it comes to ERP, such as limited finance availability, technology understanding, and human resources constraints (Rao, 2000). Anticipating the pending shift of ERP adoption to smaller-sized business entities, this chapter examines the relevant literature on ERP implementation and highlights the characteristics of midsize businesses in order to propose an adoption model for implementing ERP systems in that business sector.
Background

ERP Systems

ERP applications were built primarily to integrate different departmental functions and business processes to form a collaborative view of business operations within a single IT architecture (Klaus, Rosemann & Gable, 2000). Modern ERP applications are business-process centric and have evolved to address diverse aspects of corporate business requirements. One aspect of this evolution has been the ability of ERP systems to replace the IT legacy systems that were developed in different functional areas of the business. Another aspect of ERP success has been the capacity of such systems to integrate the supply chain so as to facilitate information flows across all business areas, in effect allowing the large corporation to be managed in real time (Turban, 2006). The manner in which ERP has applied industry standards to organisational business processes has also been recognised as a significant ERP feature (Keller & Teufel, 1998), allowing a corporation to espouse enterprise-wide best practices. Given the evolving nature of ERP systems, there are different points of view on how to explore ERP implementations. One view is to focus on ERP as a product or commodity in terms of a software application (Klaus, Rosemann & Gable, 2000), where ERP modules are integrators of all business processes and data under one inclusive umbrella. ERP systems are equipped with features that embrace costing, finance, sales, contact management, customer relationship management, and human resources (Rooney et al., 2000). Arguably, each application area becomes a central focus in order to understand and facilitate the ERP implementation process. It is important to note that obtaining the desired outcome from ERP applications can be a difficult task, due to the constraints involved in their implementation and customisation (Marnewick & Labuschagne, 2005). Therefore, it is advisable to focus on the long-term business objectives associated with ERP implementation in order to understand the complex nature of the integration processes (Boubekri, 2001). A strategic approach that addresses business needs with respect to the organisational, technical, and people (human) aspects of ERP implementation could also be suitable.
Some Benefits of ERP

Turban et al. (2006) highlighted the internal and external nature of the systems integration associated with the introduction of an ERP system. The internal integration of systems allows the different functional areas of the business to be presented under a 'single umbrella', leading to many operational benefits. External integration benefits promote cross-collaboration and data exchange between a firm's allied partners, enabling significant business-to-business (B2B) information exchanges as well as improving partner relationship management (PRM). ERP systems standardize business operations and automate business functions such as production, planning, manufacturing, purchasing, marketing, and human resources into a number of operational modules. These modules are integrated with each other, forming a relationship chain, and can provide significant benefits across the enterprise (Boubekri, 2001). Indeed, the selection of an ERP module is related to factors such as the business attributes, specific operational needs, and characteristics of the company. The ability of ERP systems to integrate business functions provides significant tangible and intangible benefits (Sandoe, Corbitt & Boykin, 2001). The tangible benefits include reductions in employee numbers and inventory stock, and improvements in productivity, order management, and timely deliveries, all of which can lead to increased profitability. The intangible benefits are associated with new or improved business processes, information supply chain visibility, process standardization, and enhanced globalization opportunities. ERP systems also allow an organization to become more 'customer centric', where more accurate and up-to-date information about customers results in enhanced customer service (Rao, 2000). A study conducted by Kennerley et al. (2001) identifies the benefits and shortcomings of ERP systems implementation in an organisation. The benefits were evaluated across four distinct areas: the corporate organisation, the operational plant, the functional divisions, and the individual employees. Kennerley et al. (2001) allude to the benefits of ERP systems implementation as being:

• Improved efficiency and control
• An ability to rationalize inventories
• Enhanced cross-border capacity and optimization
• Increased leverage opportunities with suppliers
• Improved resource and management planning
Keller & Teufel (1998) describe the standardization imposed on business processes as being another benefit of ERP system implementation. Arguably, standardization may come at the expense of business process flexibility; however, business process standardization allows industry best practice to be adopted by a business, with the commensurate benefits. The level of standardization resulting from the adoption of the best-practice standards set by ERP packages might increase concerns regarding competitive advantage. For successful implementations, existing business knowledge must be translated into application knowledge by mapping existing business processes onto the processes embedded in the ERP package and defining new processes that fit both the new system and organisational needs (Vandaie, 2008).
ERP System Implementation

A conceptual model for ERP implementation (the '4P' model) was proposed by Marnewick & Labuschagne (2005), addressing four fundamental aspects of implementation. The model derives its structure from the well-known marketing '4Ps' (people, product, process and performance). The ERP areas associated with the 4P entities are:

• People as the customers who represent the organisational requirements and mindset,
• Product as the software modules that are to be implemented across the business,
• Process as representing the project's change management issues,
• Performance as analogous to the data flows associated with the business processes.

Every aspect of this model has a direct or indirect impact on the ERP implementation process. This includes the identification of organizational requirements, the customization of the selected software, the installation and subsequent operations, and finally the important need for system training for personnel. All the proposed levels are important for ERP system adoption, allowing organisations to progress through implementation processes that require all relevant factors to be considered (Marnewick & Labuschagne, 2005).
490
pany before ERP systems should be implemented. Al-Mashari (2000) also indicates that successful ERP implementation is directly related to organizational preparedness. Success could be defined as a favourable results or satisfactory outcomes in accordance with user expectations. Outcomes of ERP system projects could be evaluated on the basis of different factors, such as technical, effectiveness and user experience related factors (Wei 2008). According to Rao (2000) there is a certain level of competence that should be achieved that reflects organizational preparedness when it comes to ERP system adoption— these levels related to areas associated with technical, human and management aspects of the organisation. A number of other approaches for ERP system adoption exist. Wilhelm et al., (2000) indicated certain traditional information systems modelling methods that could be used to reduce the persistent cost of ongoing ERP implementation. As ERP is defined as integrated business software, the modelling required for ERP implementation should detail the aspects relating to all abstraction layers in integration management. The prime objective should be progressing from upper to lower abstraction levels such as enterprise modelling to final coding with complete existing business process information (Monnerat, Carvalho and Campos 2008). Edward et al, (2003), (drawing from the work of Esteves & Pastor (1999)) uses the system life cycle model to explain six different stages of ERP systems adoption— such as adoption, decision-making, acquisition, implementation, use & maintenance, evolution and retirement. ERP has become a strategic survival instrument for businesses using information technology to conduct their operations. ERP implementation requires a huge investment and greater initiative towards engaging resources such as time, money and people (Yang, Wu and Tsai 2007). The use of multi-factor business strategies (as identified) has been suggested as a suitable approach for adoption or upgrade of an ERP system. According to Aladwani (2001), a firm needs to identify
the various organisational, technical and people strategies that could be used with the introduction of ERP systems. The organisational strategies include proper project management, recognition of organisational structure and business ideology, change strategy development and deployment, an appropriate managerial style and available communication mechanisms. The technical strategies address the technology challenges of ERP installation and include gaining a thorough understanding of systems configuration, hardware complexity, the capabilities of technical staff to handle pending challenges, and access to sufficient resources (time and cost associated factors). The people strategies associated with ERP systems implementation include the ability to identify and manage staff attitudes towards change, the inclusion and involvement of all staff in the implementation process, as well as an appropriate ERP training regime. Aladwani (2001) suggests that these strategies have significant importance in the ERP implementation process, with adherence to and use of these strategies reducing the likelihood of project failure.
ERP in Midsize Business

ERP systems have historically been associated with implementation projects in large businesses. However, there has been a recent trend for midsize businesses to also consider adopting ERP systems. In this research, an organisation with 200-500 employees and/or an annual turnover of less than US$75 million is defined as a midsize business (Gefen et al. 2005, Yates 2004, APEC 2003, Duxbury et al. 2002). Midsize businesses are considered to be the backbone of a country's economy and play a vital role in economic development. They create job opportunities, accelerate economic revival and support industry to boost economic progress (Pramukti 2003). Midsize businesses are also vulnerable and exposed to threats due to their size and operability (Sarbutts 2003). The risks associated with midsize businesses relate to the
availability of adequate resources such as time, money and skills to run business operations (Barad et al. 2001). For instance, the literature suggests that the following affect the decision-making process of introducing the latest IT applications in midsize businesses: lack of resources, availability of accurate information, lack of skilled labour and management's ability to adopt change (Rovere & Lebre, 1996). An ERP implementation with a more strategic focus should be of greater importance to senior management that has firm control over its IT operations. This would enhance management's supervision of ERP, resulting in better performance (along with improved operational and strategic control) (Ragowsky and Gefen 2008). Another strategic issue faced by midsize businesses is their need for continuous growth and for consistently updating their technology to meet existing technology standards (Raouf, 1998). The future growth of midsize businesses depends upon the use of advanced technologies to enhance their production capabilities. Use of the latest technology can help enhance production capabilities by producing good quality products at lower cost with efficient delivery to customers (Barad et al. 2001). It has been suggested that information technology in general has created opportunities for midsize businesses to be more competitive in the marketplace (Rovere & Lebre, 1996). However, midsize enterprises, because of their limited available resources, can find it difficult to improve IT support services (such as increasing the number of educated IT professionals on staff and/or expanding their IT departments). In the midsize business arena, several internal and external factors can also govern technology adoption behaviour. Kennerley et al. (2001) identified internal factors such as lack of training and insufficient information/documentation about IT systems as being problematic; external factors were associated with the level of support provided by implementation professionals and also the
nature of on-going technology upgrades. According to Rao (2000), an ERP solution is expensive and some midsize companies may not be able to afford one. Given this observation, Rao (2000) further indicates that information integration can be a major motivational factor for midsize businesses to implement ERP systems, allowing them to approach a level of business flexibility similar to large enterprises. There is some suggestion that smaller businesses are unaware of the advantages of ERP technology and of how the technology has become necessary for global interaction, an issue that, if not addressed, may eventually push these businesses out of the market (BRW, 2002). In terms of successful ERP adoption, Lee (2000) highlights the concerns of small manufacturers, finding that the benefits associated with ERP software had yet to be derived. According to Alison (2002), ERP systems users are not technological experts and ERP software tends to be less than user-friendly because of challenging interfaces; these findings potentially pose a significant user training issue for resource-limited midsize businesses. With respect to ERP systems adoption, several criteria have been proposed for small and midsize businesses to select and implement an appropriate ERP system solution; these include affordability, supplier knowledge, local support, technical upgradeability and the availability of the latest technology (Rao 2000). According to Saccomano (2003), the initial target market for ERP vendors was big companies that could afford solutions costing millions of dollars at project start-up. In recent times, many multinational companies have restricted their operations to partnering only with those midsize companies that are using compatible ERP software. Hence, it becomes essential for many midsize companies to adjust their business model and adopt ERP software that is compatible with the large enterprises with which they deal (Rao 2000). Thus, midsize enterprises are increasingly finding themselves attracted to ERP solutions and their associated benefits. Additionally, ERP systems are
becoming a necessity in order to maintain relations with larger enterprises (Rao 2000).
Section Summary

Enterprise Resource Planning (ERP) has changed the way of doing business by re-engineering and redesigning business processes in accordance with standardised business operations. The literature highlights that the main objective of implementing an ERP application is to obtain and facilitate best practice in business operations. Evidence also suggests that midsize businesses play a vital role in the collective productivity of a nation. Midsize businesses often lack leadership and strategic vision, and mainly focus on day-to-day operations. Implementing a new information system in a midsize business can be a cumbersome process, as there is not much information available for these businesses to decide what would be a better solution for them. Midsize businesses also tend to be influenced by a number of factors while selecting an information system; these relate to a lack of resources such as knowledge and skill, and the availability of time and money. Midsize businesses have also adopted a cautious approach towards ERP applications due to the lack of strategic information and the risks involved in ERP implementation.
ERP Implementation Issues and Midsize Business

There is a considerable amount of evidence suggesting that companies face problems while implementing ERP applications. Millions of dollars are spent every year to purchase and implement ERP products, with problems relating to customisation resulting in over-budget and delayed implementations (Martin 1998). The nature of the problems faced during ERP implementation is quite different from that of other IT product implementations (Parr et al. 2000). Some of the important aspects of ERP implementation are discussed below:
ERP Adoption

Most large businesses have already adopted ERP applications to meet their growing business needs. Midsize businesses have found themselves attracted to these applications due to their cost effectiveness and the collaborative requirements of doing business with larger enterprises (Klaus, Rosemann & Gable, 2000). Some of the growth factors for ERP in the mid-market bracket (midsize businesses) include: continuing industrialisation and its reliance on small and midsize business, the adoption of new technologies such as client-server, and the availability of small and medium business centric ERP applications (Rao 2000). There is a general understanding that ERP implementation is an expensive process and that midsize businesses cannot afford it, but this does not mean that midsize businesses do not need ERP applications. Information integration could be one of the major triggering points for midsize businesses to implement ERP applications and achieve high levels of business flexibility with their larger counterparts (Rao 2000). Another important aspect of ERP implementation is to understand business needs and customise the ERP product, either moulding the application(s) to existing business processes or altering business functions in accordance with ERP standards. Research indicates that customisation of an ERP application increases the risk of failure, and the cost of the project increases significantly compared with a non-customised implementation (Wilhelm et al. 2000). Higher levels of dissatisfaction amongst ERP application users have been observed due to customisation and BPR (business process re-engineering) related issues, impacting mainly the cost and duration of the project. ERP vendors have also admitted that a customer generally spends more to implement the software than to buy it (Wilhelm et al. 2000).
ERP Vendors & Midsize Businesses

The benefits of ERP applications have yet to be fully realised by small manufacturing concerns (Lee 2000). As indicated earlier, the initial target market for ERP vendors was large enterprises that could afford to implement a costly application. Later, when the large enterprise market for ERP systems dried up around the year 2000 (Saccomano 2003), ERP vendors started focusing on the mid-market bracket. ERP application development companies such as SAP and PeopleSoft have sought to increase their market share in the mid-market bracket by boosting their offers and developing business specific applications. PeopleSoft has also offered bundled database, storage and hardware options along with customer support features, and has introduced a self-service portal with features for system availability, account, billing and invoicing visibility (Ferguson 2004). ERP companies are offering extensive support with the help of their service partners (outsourcing companies) in relation to business application strategy, implementation, integration and optimisation services and so forth. Small and midsize business centric packaging is another strategy adopted by some ERP vendors to capture market share, such as SAP Business ByDesign, an on-demand business solution for midsize businesses.
ERP Implementation in Midsize Business

It is not necessary for small and midsize businesses to go for 'high brand' and costly ERP products. They could consider cheaper alternative ERP solutions that could serve their business needs (Lee 2000). It is also important to note that selecting an appropriate solution for small and midsize businesses can be difficult, depending upon their existing information technology management and business needs (Wilhelm et al. 2000). Rao (2000) presented criteria for midsize businesses to select an appropriate ERP application, consisting of the following five points (an illustrative scoring sketch follows the list):
• Product affordability: A decision should be made according to the affordability of a product and its price.
• Knowledge about supplier: An experienced supplier should be selected with a deep understanding of ERP implementation issues.
• Domestic support: ERP applications are highly sophisticated, requiring a greater degree of hands-on knowledge and expertise; it is therefore beneficial to choose a supplier who provides domestic/local support.
• Technical upgradeability: A product with upgradeability features should be selected, so that the company can upgrade the application as technology changes. A contract should be established with the vendor for annual software upgrade support.
• Latest technology: An easily implementable product should be selected, with a user-friendly interface and the capability to accommodate future modifications. It is preferable if the product is based on object-oriented technology and a GUI interface.
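Rao's criteria are qualitative, but they can be compared systematically. One common decision-support technique (our illustration, not part of Rao's proposal) is a weighted scoring model; the following minimal Java sketch, with purely hypothetical weights and vendor scores, shows how a midsize business might rank candidate ERP products against the five criteria.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal weighted-scoring sketch for ERP product selection.
// Criteria follow Rao (2000); all weights and scores are hypothetical.
public class ErpSelection {

    // Relative importance of each criterion (weights sum to 1.0).
    static final Map<String, Double> WEIGHTS = Map.of(
            "Product affordability", 0.30,
            "Knowledge about supplier", 0.20,
            "Domestic support", 0.20,
            "Technically upgradeable", 0.15,
            "Latest technology", 0.15);

    // Weighted sum of per-criterion scores (each scored 1-10).
    static double score(Map<String, Integer> scores) {
        return WEIGHTS.entrySet().stream()
                .mapToDouble(e -> e.getValue() * scores.get(e.getKey()))
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> candidates = new LinkedHashMap<>();
        candidates.put("Vendor A", Map.of(
                "Product affordability", 8, "Knowledge about supplier", 6,
                "Domestic support", 9, "Technically upgradeable", 7,
                "Latest technology", 6));
        candidates.put("Vendor B", Map.of(
                "Product affordability", 5, "Knowledge about supplier", 9,
                "Domestic support", 6, "Technically upgradeable", 8,
                "Latest technology", 9));
        candidates.forEach((vendor, scores) ->
                System.out.printf("%s: %.2f%n", vendor, score(scores)));
    }
}

The weights would of course be calibrated to the individual business's priorities; the value of such a sketch lies simply in making the trade-offs between the criteria explicit rather than ad hoc.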
According to Wilhelm et al. (2000), ERP affordability could be increased by reducing the cost of implementation and increasing user acceptance. Certain modelling approaches could be used to reduce the cost of implementation (Wilhelm et al. 2000):
• Use a reference model to select a best practice case for implementation.
• Use modelling techniques while documenting requirement definition details.
• Document the system requirements with the help of conceptual modelling methods to make the business logic more understandable.
• Use a conceptual model as the starting point for system automation, configuration and customisation (if required).
Aladwani (2001) suggests some strategies for improving ERP implementation processes. These strategies can be categorised as:
• Organisational strategies, comprising project management, organisational structure, change strategy development and deployment, managerial style, ideology, communication and so forth.
• Technical strategies, covering technical aspects of ERP implementation such as ERP installation, configuration, complexity, capable technical staff to handle the complexities, time and cost factors and so forth.
• People strategies, including management and staff related issues concerning change management, training and the level of staff engagement in a project.
As mentioned earlier, Marnewick and Labuschagne's (2005) conceptual ERP model comprises four components. These components are derived from the marketing 4Ps model (people, product, process and performance) and map onto ERP components: people as the customers' mindset, product as the software, process as change management and performance as process flow. Change management strategies are vital to promote the steps necessary for adaptability to change. Therefore, it is important to identify the factors that influence ERP user acceptance (Bueno and Salmeron 2008). Similarly, every other component also has an impact on the ERP implementation process (direct or indirect), from the identification of organisational requirements to the customisation of software, the installation that makes the software operational, and the training needed for successful adoption. These levels have significant importance and every organisation has to go through them.
Barriers to Implementing ERP in Midsize Business

The adaptability of ERP applications is one of the major constraints faced by businesses implementing them. Most ERP users are not technology experts, and do not desire to be, due to the complex and non-user-friendly nature of these applications (Alison 2002). To increase the adaptability of ERP applications, vendors have introduced different integration techniques, such as Enterprise Application Integration (EAI), and have also improved application design and modules to facilitate implementation by businesses regardless of their size (Alison, 2002). Gable (1999) identified some of the potential barriers that could cause implementation hazards for midsize businesses:
• Lack of resources and less control over business operations.
• Managers/owners might be more influential in strategic policy-making issues and could make a biased choice.
• The decision maker's background could be less technical or non-technical, resulting in a weaker understanding of technology and its implications for the business.
• The business might try to resolve sophisticated technical issues with limited technological understanding.
Aladwani (2001) indicated some crucial issues in relation to ERP implementation, chief among them the possible resistance from staff towards adopting the product. If staff consider an ERP application a threat to their jobs, they will develop a negative attitude towards it. The ERP literature does not provide sufficient help in coping with this problem, and it should be considered a major threat to ERP implementation. To overcome the possible resistance-to-change problem, management could engage and communicate with its employees in a more effective manner. Communication strategies can be useful to educate prospective users about the benefits of ERP applications. In many cases ERP projects fail due to a lack of communication; if these problems are addressed appropriately, positive benefits can be realised (Al-Mashari & Zairi, 2000).
ERP Implementation Models

Some researchers have categorised ERP implementation into stages and tried to standardise the processes for successful implementation. Bancroft (1996), Ross (1998), Markus and Tanis (2000) and Parr et al. (2000) proposed models of ERP implementation to obtain a much deeper understanding of implementation processes; these proposed models can be used as a starting point to create a similar model for midsize business.

1. Bancroft et al. (1998) developed a model as the result of a comprehensive study of ERP implementation in three anonymous multinational companies, with consultations with 20 ERP practitioners. This model consists of five phases: four pre-implementation phases ('focus', 'as is', 'to be', 'construction and testing') and one actual implementation phase ('go live'). The model covers all major ERP implementation activities, from 'focus' to 'go live', and is briefly described below:
◦ The Planning (focus) phase consists of initial project activities such as formation of the steering committee, project team selection, project guide development and project plan creation.
◦ The Analysis (as is) phase consists of business process analysis, initial ERP system installation, mapping of business processes onto ERP functions, project team training, etc.
◦ The Design (to be) phase includes high-level and detailed designing for user acceptance and interactive prototyping with constant communication with ERP users.
◦ The Construction (construction & testing) phase consists of comprehensive configuration development, population of real data in a test instance, building and testing interfaces, creation and testing of reports, and system and user testing.
◦ The Actual implementation (go live) phase includes building the network, installing desktops and organising user training and support.

2. Ross (1998) presented another model after analysing ERP implementations at 15 large case study organisations. This model comprises five phases: design, implementation, stabilisation, continuous improvement and transformation.
◦ The Design phase (which could be rephrased as planning) includes critical guidelines and decisions made towards ERP implementation.
◦ The Implementation phase covers several phases of Bancroft et al.'s (1998) model: 'as is', 'to be', 'construction & testing' and actual implementation ('go live').
◦ The Stabilisation phase comes after cut-over (final sign-off); the problems identified are fixed, consequently improving organisational performance.
◦ The Continuous improvement phase includes any functionality added to the system.
◦ Finally, the Transformation phase covers the achievement of maximum system flexibility up to the organisational boundaries (the system's operability at every organisational level).

3. Markus et al.'s (2000) theory concentrates on the sequences of activities that lead to successful implementation of ERP systems in large businesses. Markus et al. (2000) specified four major phases in the implementation life cycle: chartering, project, shakedown, and onward and upward.
◦ The Chartering phase starts before Bancroft et al.'s (1998) focus and Ross' (1998) design phases. It comprises the decisions that lead to financial approval of an ERP project, including development of a business case, package selection, identification of the project team, and budget and schedule approval.
◦ The Project phase is similar to Ross' implementation phase and covers all of Bancroft's phases except focus ('as is', 'to be', 'construction & testing' and 'actual implementation'). In this phase system configuration and rollout occur, with major activities such as software configuration, system integration, testing, data conversion, training and roll-out.
◦ The Shakedown phase refers to the period when the system begins to operate normally, with glitches removed and standards implemented.
◦ The Onward and upward phase is a combination of Ross' (1998) continuous improvement and stabilisation phases. It refers to the continuing maintenance, user support, and upgrades or enhancements required by the ERP system, and focuses on any further system extensions.

4. Parr et al.'s (2000) Project Phase Model (PPM) synthesises the previous models (Bancroft et al. 1998, Ross 1998, Markus and Tanis 2000) and includes the planning and post-implementation stages. The focus of this model is on project implementation and the factors that influence a successful implementation at each phase. Parr et al. (2000) indicated that it is important for an organisation to have a significant amount of knowledge regarding unsuccessful projects, and that an experienced 'champion' should be appointed with well-defined responsibilities. One large project should be partitioned into several sub-projects that can be identified as a vanilla implementation. The PPM consists of three major phases: planning, project and enhancement.
◦ The Planning phase comprises selecting an ERP application, formation of the steering committee, determination of the project scope and broad implementation approach, selection of the project team and determination of resources.
◦ The Project phase includes a range of activities, from identification of ERP modules to installation and cut-over. As the prime focus of this model is on implementation, this phase has been divided into five sub-phases: set-up, re-engineering, design, configuration and testing, and installation.
▪ Set-up comprises project team selection and structuring with a suitable mix of technical and business expertise. The teams' integration and reporting processes are established and guiding principles are established or re-affirmed.
▪ Re-engineering comprises analysis of the current business processes to determine the level of process re-engineering required. This sub-phase also includes installation of the ERP application, mapping of business processes onto ERP functions and training the project teams.
▪ The Design sub-phase includes high-level designing with additional details for user acceptance. It also includes interactive prototyping through constant communication with users.
▪ The Configuration & testing sub-phase includes development of the comprehensive configuration, population of real data in a test instance, building and testing interfaces, writing and testing reports, and finally system and user testing.
▪ The Installation sub-phase includes building networks, installing desktops and managing user training and support. (The last four sub-phases are similar to the phases described in the Bancroft et al. (1998) model.)
◦ The Enhancement phase comprises stages of system repair, extension and transformation, and may extend over a number of years. This phase encapsulates Ross' (1998) continuous improvement and stabilisation phases and Markus et al.'s (2000) onward and upward phase.
Section Summary

ERP implementation has been described as unique and different from other software implementations due to its strategic impact on the business. A number of attempts have been made to produce an effective model providing appropriate strategic direction for large enterprises implementing sophisticated business applications. Based on the existing research work, the sequence of events, outlined as stages, can be represented as follows (see Figure 1).
Figure 1. Sequence of events outlined as stages
Solutions and Recommendations

It cannot be assumed that midsize businesses implementing ERP can directly use the existing ERP implementation frameworks that have traditionally been used during application assessment, implementation and evaluation in large enterprises (Rao 2000). The adoption of technology by midsize businesses tends to be influenced by a number of associated factors. These factors can be summarised as a lack of experience in adopting new technology and its implementation, access to decision-making information, and the availability of general resources (i.e. skill, time and money). Midsize businesses also face a number of other challenges during ERP implementation, such as the selection of an effective IT solution, the cost of implementation and customisation, staff training, business process standardisation and
post-production application maintenance (Barad et al. 2001, Rao 2000, Gable 1999, Rovere et al. 1996). Thus, the midsize business environment (with its limitations) is an important governing aspect of research associated with ERP adoption, and needs to be part of a conceptual working model. Companies adopt precautionary measures while implementing ERP applications and attempt to mitigate the associated risks. It is suggested that midsize businesses should adopt an implementable strategy with proper planning and should resolve related problems to increase the project success rate. Taylor (1999) discussed solutions to nine challenges faced by small and midsize businesses while implementing an ERP application:
• Select scalable software that can meet future growth requirements.
• Find the best way to implement solutions at minimum cost.
• Set realistic and achievable expectations for the application and its implementation.
• Allocate the correct level of resources to achieve maximum outcomes.
• Reduce possible staff resistance by overcoming the fear of change through consistent communication and staff engagement.
• Map out key business processes to a negotiable point where the software can be implemented easily.
• Perform data conversion (data transformation from one application to another) appropriately.
• Avoid taking shortcuts or quick fixes.
• Provide technical and hands-on training.
Different research approaches have also been used to examine and identify factors that are critical to the successful implementation of ERP applications. For example, ERP implementation models (Bancroft (1996), Ross (1998), Markus and Tanis (2000), Parr et al. (2000)) identify factors associated with ERP implementation stages and the degree of importance of each factor at every implementation stage, as do the traditional systems approach (Edward et al. 2003) and the marketing derived 4Ps model (Marnewick & Labuschagne, 2005). Arguably these methods rely on resource-intensive activities that are necessary in larger and change resistant organisations. Another approach to ERP research is to focus on business strategies that allow an understanding of ERP implementation as the business progresses from one implementation stage to another (Aladwani 2001). The three core strategies (organisational, technical and people strategies) could be crucial for an organisation adopting an ERP application; indeed, these three are tangibly identifiable within the midsize business environment and less problematic to investigate if the study focuses purely on business processes. Clearly, there is a difference between the issues that need to be considered when examining ERP
adoption by midsize businesses and by their large business counterparts. It is important to adopt a collaborative approach, based upon existing research work, that provides a road map (in the form of the ERP implementation models for large-scale businesses) together with the strategic approach across the organisational, technical and people domains, while respecting the resource limitations of midsize businesses. This forms a strong base for the proposed ERP adoption model described below.
An ERP Adoption Model for Midsize Business

A need to investigate ERP implementation issues in relation to their applicability to midsize business is apparent. The literature outlines implementation models and strategies for large enterprises to achieve successful ERP implementation. This existing knowledge base can be used to develop a strategic ERP adoption model for midsize businesses that provides a workable solution for their ERP implementations. In the past, much ERP research was described as 'factor research', mainly focused upon identifying factors or variables critical to ERP implementation. More recent research has focused on processes that help understand 'how' an implementation takes place (Aladwani, 2001). To take advantage of both perspectives, it is important to adopt an integrated approach in order to better understand the issues relating to ERP implementation. The link between factors and stages is crucial to analysing how the importance of different factors changes from stage to stage during ERP implementation (Markus et al. 2000). This helps to assess what factors affect which process during certain periods of time, and what impact is seen on the process itself. Parr et al.'s (2000) Project Phase Model and Markus et al.'s (2000) process theory are useful tools for conducting this factor impact analysis while developing an ERP adoption model for midsize businesses.
The major focus of this research is to develop an ERP adoption model for midsize businesses by critically evaluating the strategic factors and issues with respect to the different stages of implementation. Given the various resource limitations associated with midsize businesses and the potential challenges of ERP systems adoption, this study is important in focusing on a specific business sector (midsize) as a basis for proposing a model. The resultant model will contribute to an increased understanding of implementation processes, factors, strategies and issues in relation to midsize business, enabling them to determine an appropriate solution in accordance with their operational needs. Figure 2 provides a 'bird's eye' view of the complex relationship that exists between the project implementation phases and strategies (organisational, people, technical) and the issues relating to midsize business. This model was developed by identifying the ERP implementation stages defined in Parr et al.'s (2000) Project Phase Model (PPM) (also present in the Bancroft et al. (1998), Ross (1998), and Markus and Tanis (2000) ERP implementation models) and the three major strategies impacting ERP implementation (organisational, technical and people) identified by Aladwani (2001). It also includes the identification of midsize business specific issues (Barad et al. 2001, Rao 2000, Gable 1999, Rovere et al. 1996) and their management to mitigate any associated risks. Thus this model adopts an integrated approach, identifying the factors critical to ERP implementation along with the processes crucial to every stage of implementation. Table 1 provides a detailed view of issues in relation to implementation stages and strategies during the implementation process. The model is designed to identify key factors associated with ERP implementation processes. The intent is to adopt a best practice, theoretically based approach, encapsulating the existing literature to propose a strategic ERP adoption model specifically designed to facilitate the needs of midsize businesses. An enormous amount of research has been conducted in relation to ERP implementation in large enterprises; this is used to help identify many of the ERP implementation issues faced by midsize businesses. However, we also introduce those factors that relate specifically to midsize businesses, because they differ from large businesses. Midsize businesses are strategically fragile and economically less stable, with limited operability.

Figure 2. ERP adoption model for midsize businesses
Table 1. ERP adoption model for midsize businesses – Detailed diagram

For each stage (Bancroft et al. 1998; Parr et al. 2000), the table lists: Activities (Bancroft et al. 1998; Parr et al. 2000); Organisational factors (Aladwani 2001); People factors (Aladwani 2001); Technical factors (Aladwani 2001); and Midsize business issues (Barad et al. 2001; Rao 2000; Gable 1999; Rovere 1996).

Pre-Planning
Organisational: change strategies development; risk management.
Midsize business: business & technology issues; strategic management issues; criteria for selecting an IS.

Planning
Activities: ERP application selection; project scope determination; project team selection; resource determination.
Organisational: change strategies development; project management; risk management.
People: training strategies; change management.
Technical: time & cost of implementation.
Midsize business: accurate information; limited resources (time, budget).

Setup & Re-engineer
Activities: team structure & integration; guiding principles; business process analysis; installation of ERP application; business process mapping; team training.
Organisational: organisational resources; organisational structure; managerial style; organisational ideology.
People: staff attitude to change; management attitude.
Technical: ERP complexity; in-house expertise; cost of implementation.
Midsize business: limited resources (budget, skill).

System Design
Activities: high-level designing; additional details for user acceptance; interactive prototyping; user communication.
Organisational: organisational resources; communication & coordination; risk monitoring.
People: staff involvement.
Technical: ERP complexity; in-house expertise; cost of implementation.
Midsize business: business & technology issues.

Configuration & Testing
Activities: comprehensive configuration; real data in test instance; build test interfaces; write & test reports; system & user testing.
Organisational: information system function; communication & coordination.
People: staff involvement.
Technical: ERP installation aspects; in-house expertise; cost of implementation.
Midsize business: limited resources (budget, skill).

Installation & Go live
Activities: building network; desktop installation; user training; system support.
Organisational: change strategies (update); risk management (update).
People: staff attitude to change (update); management attitude (update).
Technical: ERP implementation issues (update).
Midsize business: business & technology issues (update); strategic management issues (update).
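Because the model is essentially a matrix of stages against factor dimensions, it also lends itself to a simple machine-readable encoding, for instance as a checklist in a project-tracking tool. The following Java sketch is our own illustration, not part of the published model; the stage and dimension names follow Table 1, while the checklist entries shown for the Planning stage are abbreviated examples rather than the full table.

import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Illustrative machine-readable encoding of the adoption matrix in Table 1.
public class AdoptionModel {

    enum Stage {
        PRE_PLANNING, PLANNING, SETUP_REENGINEER,
        SYSTEM_DESIGN, CONFIGURATION_TESTING, INSTALLATION_GO_LIVE
    }

    enum Dimension { ACTIVITIES, ORGANISATIONAL, PEOPLE, TECHNICAL, MIDSIZE_ISSUES }

    // stage -> dimension -> checklist items
    static final Map<Stage, Map<Dimension, List<String>>> MODEL = new EnumMap<>(Stage.class);

    static {
        // Abbreviated example entries for one stage only.
        Map<Dimension, List<String>> planning = new EnumMap<>(Dimension.class);
        planning.put(Dimension.ACTIVITIES,
                List.of("ERP application selection", "Project scope determination"));
        planning.put(Dimension.PEOPLE,
                List.of("Training strategies", "Change management"));
        planning.put(Dimension.MIDSIZE_ISSUES,
                List.of("Accurate information", "Limited resources (time, budget)"));
        MODEL.put(Stage.PLANNING, planning);
    }

    public static void main(String[] args) {
        // Print the checklist for the Planning stage.
        MODEL.get(Stage.PLANNING).forEach((dimension, items) ->
                System.out.println(dimension + ": " + items));
    }
}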
Now we shall discuss the proposed composition of the model and the importance of the strategic mixture being proposed.
The Model

The model is divided into two major dimensions, ERP implementation stages and factors impacting implementation, which are represented as a matrix in Table 1. The objective is to underline the interrelationship between these two modules and to suggest activities, strategies and tasks for executing the project efficiently. Midsize businesses often lack leadership and strategic vision, and mainly focus on day-to-day operations. Midsize businesses also tend to be influenced by a number of factors while selecting an information system, and are often limited by their lack of knowledge and skill. Our model provides midsize businesses with a broader picture of the issues that they could encounter
during the ERP implementation processes, and will assist them in achieving a controlled implementation.
ERP Implementation Stages

As shown earlier, different researchers have identified planning, set-up, re-engineering, system design, configuration, testing and installation as separate stages, but we have consolidated these to align with midsize implementation. Markus et al. (2000) identified 'chartering' as a crucial stage that contains the decision-making processes leading up to the selection of an ERP application. This is reflected as a pre-planning phase in our model, highlighting the activities that are important for the decision-making processes leading up to the selection of a suitable ERP application.

1. Pre-Planning: It is important for midsize businesses to perform a comprehensive pre-planning analysis of their existing financial and operational performance indicators. At the organisational level, strategic planning for projects becomes vital when risks are high and resources are limited. All important decisions leading to financial approval, the development of a business case, and the gathering of appropriate business, technical and architectural information should be made, and the results shared with the appropriate people for an informed decision. Midsize businesses should assess the operational significance and collective business benefits of the proposed application before making any judgements. Change management and risk management plans should be developed to underline areas that should be considered during implementation.

2. Planning: This is the first official stage of the project, in which initial project activities should be performed, such as the identification of key stakeholders, the formation of a governing body and project team selection (including hiring new staff). Change and risk management strategies should be revisited and updated if necessary. A project management plan should be developed to scope the project activities; project tasks should be scheduled and resources should be identified and allocated (including time and money). Accurate and timely information is very important for midsize businesses to execute project plans in accordance with their desired expectations. Therefore it is important that information is accurate throughout the entire planning process.

3. Setup & Re-engineer: To execute the project effectively, it is important to structure the project team with the correct mix of technical and business professionals. As midsize businesses lack resources, it is crucial for them to decide whether they need to hire or acquire the necessary skills. Midsize businesses should identify and reassess their available resources (in-house expertise and money) to structure the project team according to the standard required for ERP implementation. The cost of implementation can be significantly higher if the application needs customisation. Therefore, the organisation's ideology should be examined to assess staff and management attitudes to change before taking any decisions. The guiding principles of the project should be identified and a business case analysis should be completed to underline the expectations. The ERP application should be installed in the development environment and business process mapping should take place with a gap analysis. Internal team training should occur to equip existing organisational staff with the appropriate skill levels. For midsize businesses it would usually be wise to have the right mix of in-house and third-party technical expertise to avoid any surprises in the post go-live phase.

4. System Design: This is an important stage in which the high-level design should be completed and approved. Extensive communication and coordination is required to address organisational expectations, and users should be engaged consistently during the development process. Details in relation to user acceptance should be captured and documented. Staff and management attitudes to change should be examined and the change management plan should be updated to cater for resistance to change. ERP applications are complex in nature; therefore, associated risks should be analysed and addressed by developing a suitable risk mitigation plan. An initial interactive application prototype should be completed to demonstrate application functionality. This functionality should also be compared with the midsize business's expectations to ensure that it addresses the business and technology needs.

5. Configuration & Testing: Once the interactive prototype is completed, its comprehensive configuration should be executed in accordance with the requirements identified in the design document. Real data should be populated in test instances for system testing, test interfaces should be developed, and reports should be documented and tested accordingly. During the entire testing process, staff should be engaged and extensive communication should be conducted at the organisational level. Information system functions should be assessed and prospective change should be coordinated. The project budgetary estimates should be assessed and existing staff skill levels should be reassessed. System and user testing should be completed in this stage.

6. Installation & Go live: In this stage, all post-testing activities should be executed, such as building the production environment, building the network (if required) and desktop installation (if required). User training should be completed and the system should go live in the production environment. The lessons learned from the implementation should be documented, including change and risk management strategies, management of staff attitudes to change, ERP implementation, business technology issues and so forth. System support should be ongoing to perform post-production glitch analysis.
Future Research

Developing and Refining an ERP Adoption Model

In order to refine the ERP adoption model for midsize businesses, a research study containing quantitative and qualitative data collection stages is proposed (Leedy 1997). The data collection focus in this study will primarily be on Australian midsize businesses; however, the approach is also applicable to midsize businesses around the world. Stage one of the methodology roadmap will use surveys to provide an understanding of what is happening in the midsize business market in relation to ERP adoption. The second stage of the roadmap utilises case studies that will help to further refine the adoption model by determining the important reasons why midsize businesses should adopt ERP applications.
Stage One: Industry Survey

Stage one of the research methodology explores the implementation stages with reference to the overall strategies that are important while implementing an ERP application, as well as midsize business specific issues. In this stage, the activities to be performed in each implementation stage, the impact of the related strategies, and the midsize business related issues, problems and benefits will be explored. The survey instrument will concentrate on around 200 midsize companies using ERP applications, along with 200 non-ERP-using companies in Australia, in an endeavour to determine the various operational and implementation factors and the strategic issues associated with ERP implementation. The questionnaire will investigate the level of usage of ERP systems in the mid-market and specifically examine: the strategies that led to smooth implementation of ERP applications; the selection of business solutions for a business type; the issues and impact of the proposed strategies during the implementation process; the factors affecting the implementation processes; and the sequence of activities performed in each implementation stage. The outcome of this survey will be used to generate a first iteration, or revised version, of the ERP adoption model.
Stage Two: Industry Case Studies & Expert Panel

Stage two of the investigative methodology will consist of multiple case studies. The respondents to the industry survey (stage one) will be asked to state their interest in a follow-up interview. The purpose of this exercise is to conduct a qualitative analysis examining each case in depth, in order to understand the individual experiences of midsize businesses. This will provide an opportunity to obtain first-hand knowledge about the organisations and their understanding of the issues relating to ERP implementation in midsize businesses. The strategies adopted by these businesses to overcome issues relating to their implementation of ERP applications will also be discussed. This data collection stage will help to identify specific strategies that these businesses have adopted to help them address certain issues and problematic situations. The case studies will be categorised into three levels:
• Midsize businesses that have performed an ERP implementation;
• Midsize companies that tried to implement an ERP application but 'rolled back'; and
• Midsize businesses that have not tried, but are considering, an ERP solution for their problems.
A total of fifteen to eighteen businesses will be interviewed; they will be selected depending upon the responses received and their ERP implementation results. The resulting second iteration of the model will be presented to ERP system experts to ascertain its technological and managerial implications. The feedback gathered from the ERP experts will confirm the industry findings and potentially also identify new or undocumented strategies that are directly related to the area of research. The data received from the different sources will be evaluated and the results used in further refinement of the ERP adoption model. The resultant ERP adoption model derived from this two-stage methodology will provide a road map for midsize businesses, incorporating strategies that should be considered in accordance with their situation, background, financial position and readiness to implement ERP applications successfully.
Conclusion

This chapter examines the impact of ERP implementation on midsize businesses by discussing the factors, with reference to strategies and processes, that are important for ERP implementation in midsize businesses. Many constraints have been identified in relation to ERP implementation, especially when customisation is needed. Sometimes businesses need to customise these applications to add or remove features to serve their business needs; hence it is beneficial to identify business requirements and scope the project objectives before initiation. The chapter also examines the nature of midsize businesses and argues that ERP adoption is likely to be an important consideration for these businesses in the near future. Midsize businesses are dependent upon many internal (organisational) and external (wider economic) influential factors. Arguably, some internal or external factors might force midsize businesses to adopt ERP applications, not only to make them more competitive but also due to pressures associated with their larger counterparts. Therefore, it is important to underline the factors that could impact midsize businesses during sophisticated application adoption processes.

Implementation of an ERP system is different from that of any other software application due to its impact on business operations and its requirement to facilitate business needs. A number of attempts have been made by different researchers to produce an effective framework enabling businesses to have a strategic direction for ERP implementation. ERP implementation models were developed specifically to focus on the identification of large enterprise implementation requirements and the activities/stages that are crucial for their implementation. It would be beneficial to utilise this existing knowledge base while developing a strategic model for ERP adoption in midsize businesses. Such a model should focus on the strategic issues faced by midsize businesses, providing guidelines that could help to mitigate the risks associated with ERP implementation. ERP adoption is also discussed from the perspective of a number of different implementation methodologies that are traditional, process focused, marketing derived, or strategy oriented. It is argued that a strategic approach focusing on the organisational, technical and people areas, alongside midsize business factor analysis, is desirable when outlining the activities to be performed in each implementation stage. Hence, a conceptual ERP adoption model includes a strategic approach to investigate and deliver a road map for the resource-limited midsize business environment.

To test and refine the proposed ERP adoption model, a methodological roadmap is documented that embraces quantitative and qualitative data collection stages and will be tested on the Australian midsize business market. The quantitative stage is associated with the capture of business characteristics that detail the scope of ERP systems adoption and the identification of salient aspects of strategy across the three areas of interest (organisational, technical, people). The qualitative stage of the roadmap involves the capture of the individual experiences of midsize businesses' adoption through the case study method, in order to gain first-hand knowledge about the organisations and the ERP strategies they implemented. Each stage of the roadmap will allow the progressive development and refinement of the proposed ERP adoption model.
References

Aberdeen Group. (2006). ERP in the mid-market. Boston: Aberdeen Group, Inc.

Al-Mashari, M. (2002). ERP systems: A research agenda. Industrial Management & Data Systems, 165–170. doi:10.1108/02635570210421354

Al-Mashari, M., & Zairi, M. (2000). Information and business process equality: The case of SAP R/3 implementation. Electronic Journal on Information Systems in Developing Countries, 2.

Aladwani, A. M. (2001). Change management strategies for successful ERP implementation. Business Process Management Journal, 7(3), 266–275. doi:10.1108/14637150110392764

Alison, C. (2002, Dec). Works Management. Horton Kirby, 55(12), 30–33.

Amoako-Gyampah, K. (2007). Perceived usefulness, user involvement and behavioural intention: An empirical study of ERP implementation. Computers in Human Behavior, 23, 1232–1248. doi:10.1016/j.chb.2004.12.002

APEC Profile of SMEs. (2003). What is an SME? Definitions and statistical issues. Journal of Enterprising Culture, 11(3), 173–183. doi:10.1142/S021849580300010X

Bancroft, N., Seip, H., & Sprengel, A. (1998). Implementing SAP R/3 (2nd ed.). Greenwich: Manning Publications.
Barad, M., & Gien, D. (2001). Linking improvement models to manufacturing strategies – A methodology for SMEs and other enterprises. International Journal of Production Research, 39(12), 2675–2695. doi:10.1080/002075400110051824

Boubekri, N. (2001). Technology enablers for supply chain management. Integrated Manufacturing Systems, 12(6), 394–399. doi:10.1108/EUM0000000006104

BRW. (2002, November). Fast 100 issue. Cited by Business Technologies for SMEs, October 2003, conference at Sydney.

Bueno, S., & Salmeron, J. (2008). TAM-based success modelling in ERP. Interacting with Computers, 20, 515–523. doi:10.1016/j.intcom.2008.08.003

Caillaud, E., & Passemard, C. (2001). CIM and virtual enterprises: A case study in a SME. International Journal of Computer Integrated Manufacturing, 14(2), 168–174. doi:10.1080/09511920150216288

Chung, S. H., & Snyder, C. A. (1999). ERP initiation – A historical perspective. Americas Conference on Information Systems, August 13-15, Milwaukee, WI.

Correa, C. (1994). Cited by Aladwani (2001), Change management strategies for successful ERP implementation. Business Process Management Journal, 7(3), 266–275.

Duxbury, L., Decady, Y., & Tse, A. (2002). Adoption and use of computer technology in Canadian small businesses: A comparative study. In Managing Information Technology in Small Business: Challenges & Solutions (pp. 22-23). Hershey, PA: Information Science Publishing.

Edward, W. N., Bernroider, N., & Tang, K. H. (2003). A preliminary empirical study of the diffusion of ERP systems in Austrian and British SMEs. Working Papers on Information Processing and Information Management.

Ferguson, R. B. (2004). ERP targets the midmarket. eWeek, 21(6), 41.

Gable, G., & Stewart, G. (1999). SAP R/3 implementation issues for small to medium enterprises. Americas Conference on Information Systems, August 13-15, Milwaukee, WI.

Gefen, D., & Ragowsky, A. (2005). A multi-level approach to measuring the benefits of an ERP system in manufacturing firms. Information Systems Management Journal, 22(1), 18–25. doi:10.1201/1078/44912.22.1.20051201/85735.3

Keller, G., & Teufel, T. (1998). SAP R/3, process oriented implementation. Harlow: Addison-Wesley.

Kennerley, M., & Neely, A. (2001). Enterprise resource planning: Analysing the impact. Integrated Manufacturing Systems, 12(2), 103–113. doi:10.1108/09576060110384299

Klaus, H., Rosemann, M., & Gable, G. G. (2000). What is ERP? Information Systems Frontiers, 2(2), 141–162. doi:10.1023/A:1026543906354

Koch, C. (2008). The ABC of ERP. Enterprise Resource Planning Research Center. Retrieved from http://www.cio.com/research/erp/edit/erpbasics.html

Lee, T. T. (2000). Apt ERP alternatives. New Straits Times-Management Times.

Leedy, P. D. (1997). Practical research – Planning and design (6th ed.). NJ: Prentice-Hall, Inc.

Markus, M. L., Axline, S., Petrie, D., & Tanis, C. (2000). Learning from adopters' experiences with ERP: Problems encountered and success achieved. Journal of Information Technology, 15, 245–265. doi:10.1080/02683960010008944

Markus, M. L., & Tanis, C. (2000). The enterprise systems experience – From adoption to success. In R. W. Zmud (Ed.), Framing the domains of IT management: Projecting the future…through the past (pp. 173-207).
Marnewick, C., & Labuschagne, L. (2005). A conceptual model for enterprise resource planning (ERP). Information Management & Computer Security, 13(2). doi:10.1108/09685220510589325

Martin, M. (1998). An electronics firm will save big money by replacing six people…not every company has been so lucky. Fortune, 137(2), 149–151.

Monnerat, R., Carvalho, R., & Campos, R. (2008). Enterprise systems modeling: The ERP5 development process. SAC '08, March 16-20, Fortaleza, Ceara, Brazil.

Parr, A., & Shanks, G. (2000). A model of ERP project implementation. Journal of Information Technology, 15, 289–303. doi:10.1080/02683960010009051

Pramukti, S. (2003). Establishing synergy between small companies and banks. Jakarta Post, 06/03/2003. Accession Number: 2W81194803776, Business Source Premier.

Ragowsky, A., & Gefen, D. (2008). What makes the competitive contribution of ERP strategic. The Data Base for Advances in Information Systems, 39(2).

Rao, S. S. (2000). Enterprise resource planning: Business needs and technologies. Industrial Management & Data Systems, 100(2), 81–88. doi:10.1108/02635570010286078

Raouf, A. (1998). Development of operations management in Pakistan. International Journal of Operations & Production Management, 18(7), 649–650. doi:10.1108/01443579810217602

Rooney, C., & Bangert, C. (2000). Is an ERP system right for you? Adhesives Age, 43(9), 30–33.

Ross, J. W. (1998). The ERP revolution: Surviving versus thriving. Centre for Information Systems Research, Sloan School of Management.

Rovere, L., & Lebre, R. (1996). IT diffusion in small and medium-sized enterprises: Elements for policy definition. Information Technology for Development, 7(4), 169–181.

Saccomano, A. (2003). ERP vendors consolidate. Journal of Commerce, 4(24), 46.

Sandoe, K., Corbitt, G., & Boykin, R. (2001). Enterprise integration. New York: Wiley.

Sarbutts, N. (2003). Can SMEs 'do' CSR? A practitioner's views of the ways small- and medium-sized enterprises are able to manage reputation through corporate social responsibility. Journal of Communication Management, 7(4), 340–348. doi:10.1108/13632540310807476

Taylor, J. (1999). Management Accounting.

Turban, E., Leidner, D., McLean, E., & Wetherbe, J. (2006). Information technology management: Transforming organisations in the digital economy (5th ed.). New York: John Wiley & Sons.

Vandaie, R. (2008). The role of organizational knowledge management in successful ERP implementation projects. Knowledge-Based Systems, 21, 920–926. doi:10.1016/j.knosys.2008.04.001

Wei, C. (2008). Evaluating the performance of an ERP system based on the knowledge of ERP implementation objectives. International Journal of Advanced Manufacturing Technology, 39, 168–181. doi:10.1007/s00170-007-1189-3

Wilhelm, S., & Habermann, F. (2000). Making ERP a success. Communications of the ACM, 43(4), 57–61. doi:10.1145/332051.332073

Yang, J., Wu, C., & Tsai, C. (2007). Selection of an ERP system for a construction firm in Taiwan: A case study. Automation in Construction, 16, 787–796. doi:10.1016/j.autcon.2007.02.001

Yates, I. (2004). 2004 proved successful for SAP Latin America. Caribbean Business, 33(10).
This work was previously published in Enterprise Information Systems for Business Integration in SMEs: Technological, Organizational, and Social Dimensions, edited by Maria Manuela Cruz-Cunha, pp. 153-174, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 2.11
Developing and Customizing Federated ERP Systems

Daniel Lübke
Leibniz Universität Hannover, Germany

Jorge Marx Gómez
University of Oldenburg, Germany
Abstract

Small and Medium Enterprises (SMEs) are the most important drivers in many economies. Due to their flexibility and willingness to innovate, they can stand up to larger industry players. However, SMEs, like every other company, need to further reduce costs and optimize their business in order to stay competitive. Larger enterprises utilize ERP systems and other IT support for reducing costs and time in their business processes. SMEs lag behind because the introduction and maintenance of ERP systems are too expensive, the return on investment is achieved too late, and the associated financial risks are too high. However, SMEs would like to have IT support for their business. The research behind the Federated ERP System (FERP) addresses the problems SMEs face with conventional ERP systems and offers reasonable and scalable IT support. This is done by decomposing the whole business logic of the ERP system into Web services, which are linked at run-time.
The service composition is realized by a workflow system that is also responsible for creating and managing the user interfaces and the data-flow. By integrating only the Web services that are needed (possibly from third parties), the cost is reduced and the functionality can be scaled to the actual needs. However, not only is a technical solution needed; the development process must also be tailored towards SMEs. Small companies cannot afford highly-skilled staff and often do not have defined business processes.
Introduction

The business world is rapidly moving, and Small-to-Medium Size Enterprises (SMEs) are competing within this vibrant marketplace with their flexibility and ability to innovate. They are an important part of the economy. For example, according to the IfM Bonn (2008), SMEs in Germany account for 38.3% of the overall turnover
Figure 1. Reference architecture of an FERP system
and employ 70.6% of all employees nationwide. In order to operate efficiently, SMEs need enterprise software, like ERP systems, for managing their business operations. However, ERP systems impose high costs due to their expensive purchase, customizing costs, and re-customizing costs whenever business processes are changed. Thus, business process changes that are necessary to stay competitive become more costly than before. This inevitably leads to the question of how to make ERP systems better suited to SMEs in order to make them more competitive in the long run. The answer to this question is decomposed into two parts. The first part is a new architecture for such systems, which can be introduced, operated, and maintained more cheaply. The second part addresses how to arrive at (new) requirements for the ERP system based on the business processes. A system that can be flexibly changed is worthless if no one knows what the desired result is. Within this chapter we introduce the Federated ERP System as a new architecture for ERP systems that is especially suited to SMEs. We describe the overall architectural ideas as well as our implementation. In the second part we present a technique for deriving and discovering business processes from textual scenarios, so-called use cases, known from the software engineering domain.
Federated ERP Systems

Problem Addressed

An ERP system is a standard software system which provides functionality to integrate and automate the business practices associated with
the operation or production aspects of a company. The integration is based on a common data model for all system components and extends to more than one enterprise sector (see Robey et al., 2002; Rautenstrauch & Schulze, 2003). However, there are some disadvantages associated with conventional ERP systems. The main ones are:

• In most cases, not all of the installed components are needed,
• high-end computer hardware is required to run the system, and
• customization of ERP systems is very expensive because product-specific know-how of experts is necessary.
Due to the expensive process of installation and maintenance, only large enterprises can afford complex ERP systems which provide business logic for all sectors of the functional enterprise organization. In contrast, FERP systems allow the separation of local and remote functions, whereby no local resources are wasted on unnecessary components. Furthermore, single components are executable on small computers, and due to the decreasing complexity of the local system, installation and maintenance costs subside, too.
Reference Architecture

Figure 1 gives an overview of the reference architecture of a Web Service-based FERP system. The architecture consists of several subsystems, which are interconnected. Because one of the main objectives of an FERP system is to integrate business components of different vendors, all components have to comply with standards. In this approach these standards are described by XML schema documents. In order to separate the three different layers of a typical layered architecture of conventional ERP systems, each layer is assigned its own standard.
The subsystems of the proposed architecture are the following:
FERP Workflow System (FWfS)

The FWfS coordinates all business processes, which have to be described in an appropriate XML-based workflow language. A workflow in this context is a plan of sequentially or parallelly chained functions as working steps. Each step represents an activity which leads to the creation or utilization of business benefits. Workflows implicitly contain the business logic of the overall system. The function types that can be contained in a workflow in FERP systems are the following:

• Model-based user interface functions, e.g. show, edit, select, control
• Database access functions, e.g. read, update
• Application tasks which are connected to Web Service calls
FERP User System (FUS)

The FUS is the subsystem which implements functions for the visualization of graphical elements and coordinates interactions with end users. This subsystem is able to generate user screens at runtime. Screen descriptions, which have to comply with the FERP UI standard, are transformed to an end device-readable format, e.g. HTML in the case of web browsers.
FERP Database System (FDS)

The FDS is the subsystem which implements functions for the communication with the FERP database. This subsystem is able to interpret XML structures which comply with the FERP data standard. The interface differentiates between two kinds of requests. Database update requests contain object-oriented representations of business entities as XML trees. Database read requests
Figure 2. Architecture of the prototype
contain XPath or XQuery expressions specifying portions of data to be extracted. In both cases the request parameters have to be transformed into different types of request statements, which vary depending on the type of database management system (DBMS) that is used. Assuming the use of a relational DBMS (RDBMS), the underlying data model also has to comply with the FERP data standard, which means that the corresponding table structure has to reflect the XML Schema specifications. The java.net project hyperjaxb2 [1] provides a solution to generate SQL statements on the basis of XML schema definitions. Another solution is the application of native XML databases or XML-enabled RDBMS.
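As a rough illustration of this approach, the following Java sketch unmarshals an XML business document with JAXB and persists the resulting object tree with Hibernate, in the style of hyperjaxb2. The package name, file name, and configuration file are hypothetical placeholders, not taken from the prototype.

```java
import java.io.File;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class FdsUpdateRequest {

    public static void main(String[] args) throws Exception {
        // Unmarshal an XML business entity (a database update request) into
        // objects generated from the FERP data standard's XML Schema.
        // "ferp.generated" and "purchase-order.xml" are placeholders.
        JAXBContext ctx = JAXBContext.newInstance("ferp.generated");
        Unmarshaller unmarshaller = ctx.createUnmarshaller();
        Object order = unmarshaller.unmarshal(new File("purchase-order.xml"));

        // Persist the unmarshalled object tree; Hibernate maps it onto the
        // relational schema that mirrors the XML Schema structure.
        SessionFactory factory = new Configuration().configure().buildSessionFactory();
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        session.save(order);
        tx.commit();
        session.close();
    }
}
```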
FERP Web Service Consumer System (FWCS)

The business logic of FERP systems is encapsulated in so-called FERP business components, which are wrapped in a Web Service. The FWCS is the subsystem that provides the functionality for the invocation of Web Services. All possible types of FERP Web Services are specified by the FERP WS standard. This standard contains XML schema definitions that describe Web Service operations as well as input and output messages. A Web Service references these types in its WSDL description. Furthermore, this subsystem is able to search for Web Services, which are defined by a unique identifier. This way it is possible that different Web Service providers implement the same business component type as a Web Service. Besides the implementation of Web Service invocation and search functionality, this subsystem is responsible for the interpretation and consideration of non-functional parameters. Examples of such parameters are security policies, payment policies, and Quality of Service (QoS) requirements on the part of Web Service consumers.
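A minimal sketch of such an invocation, written against the JAX-RPC Dynamic Invocation Interface that the prototype's FWCS uses; the namespace, endpoint address, and operation name are hypothetical, and the order is passed as a plain XML string rather than the standard's typed messages:

```java
import javax.xml.namespace.QName;
import javax.xml.rpc.Call;
import javax.xml.rpc.ParameterMode;
import javax.xml.rpc.Service;
import javax.xml.rpc.ServiceFactory;

public class FwcsInvoker {

    private static final String XSD = "http://www.w3.org/2001/XMLSchema";

    public static void main(String[] args) throws Exception {
        String ns = "http://example.org/ferp/ws"; // hypothetical namespace

        ServiceFactory factory = ServiceFactory.newInstance();
        Service service = factory.createService(new QName(ns, "OrderService"));

        // The Dynamic Invocation Interface needs no generated stubs, so any
        // provider implementing the standardized operation can be called.
        Call call = service.createCall();
        call.setTargetEndpointAddress("http://provider.example.org/ferp/OrderService");
        call.setOperationName(new QName(ns, "calculateTotalSum"));
        call.addParameter("order", new QName(XSD, "string"), ParameterMode.IN);
        call.setReturnType(new QName(XSD, "string"));

        String completed = (String) call.invoke(new Object[] { "<order>...</order>" });
        System.out.println(completed);
    }
}
```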
Figure 3. Process model in YAWL as simplified example for the creation of a purchase order
FERP Web Service Provider System (FWPS)

The FWPS is the subsystem which implements functions for the provision of Web Services which comply with the FERP WS standard. The subsystem includes a Web server which is responsible for the interpretation of incoming and outgoing HTTP requests, which in turn encapsulate SOAP requests. The subsystem provides business components of the FERP system as Web Services. A connection to the FERP Web Service Directory allows the publication of Web Services. Furthermore, this subsystem is responsible for the negotiation of common communication policies, such as security protocols or usage fees, with the requesting client.
FERP Web Service Directory (FWD)

The FWD provides an interface for the publication and searching of FERP Web Services based on the UDDI standard. The structure of this registry follows the FERP WS standard. In this standard, Web Services are assigned to categories mirroring the predetermined functional organization of enterprises.
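The prototype described below realizes the FWD with jUDDI. As a rough sketch only, a consumer-side lookup could be written against the generic JAXR registry API as follows; the inquiry URL and the service name pattern are hypothetical, and a real FWCS would query by the standard's unique component identifiers rather than by name:

```java
import java.util.Arrays;
import java.util.Properties;

import javax.xml.registry.BulkResponse;
import javax.xml.registry.BusinessQueryManager;
import javax.xml.registry.Connection;
import javax.xml.registry.ConnectionFactory;
import javax.xml.registry.RegistryService;
import javax.xml.registry.infomodel.Service;

public class FwdLookup {

    public static void main(String[] args) throws Exception {
        // Hypothetical inquiry endpoint of a jUDDI installation.
        Properties props = new Properties();
        props.setProperty("javax.xml.registry.queryManagerURL",
                "http://registry.example.org/juddi/inquiry");

        ConnectionFactory factory = ConnectionFactory.newInstance();
        factory.setProperties(props);
        Connection connection = factory.createConnection();

        RegistryService registry = connection.getRegistryService();
        BusinessQueryManager queries = registry.getBusinessQueryManager();

        // Find services whose names match a pattern.
        BulkResponse response = queries.findServices(
                null, null, Arrays.asList("%OrderService%"), null, null);
        for (Object found : response.getCollection()) {
            Service service = (Service) found;
            System.out.println(service.getName().getValue());
        }
        connection.close();
    }
}
```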
Prototype Development

The following paragraphs briefly describe a first implementation of the proposed reference architecture, which is based on open source software components. Figure 2 shows the architecture of our prototype. For the implementation of the FWfS we chose the workflow engine of the YAWL project [2]. The FUS was implemented on the basis
Figure 4. Generation of a simple user interface for a customer record
of Apache Struts [3]. Our FDS is mainly based on the API of the Hyperjaxb2 project, which in turn uses JAXB [4] and Hibernate [5]. jUDDI [6] served as the basis for the implementation of the FWD. The FWCS uses JAX-RPC [7] (Java API for XML-based RPC), which is provided by the Sun Developer Network (SDN). Our FWPS uses Apache Axis [8] as the basis for the provision of Web Services. Figure 3 shows an example process model in YAWL. Tasks in our process definitions can be assigned to one of three function types:

• Database communication (in Figure 3 indicated as DB-task)
• End-user communication (in Figure 3 indicated as GUI-task)
• Web Service communication (in Figure 3 indicated as WS-task)
All other symbols comply with the graphical notation of YAWL. The example process model demonstrates a workflow for the creation of a purchase order [9]. The example includes only one Web Service call, which is responsible for the calculation of the total sum of a purchase order consisting of one or more order items. Order items include a price and an amount. The Web Service receives the whole order as an XML document without the total sum. Having finished the calculation of the total sum, the Web Service returns the completed order as an XML document. The next workflow task visualizes this XML document. After the user agrees, the XML document is transmitted to the FERP database system, which transforms it into an SQL INSERT statement in the next workflow task.
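The business logic behind this example service is deliberately simple. A minimal sketch of the total-sum calculation (the class and field names are ours, for illustration only, not taken from the prototype):

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;

public class OrderCalculator {

    static class OrderItem {
        final BigDecimal price; // unit price
        final int amount;       // ordered quantity

        OrderItem(BigDecimal price, int amount) {
            this.price = price;
            this.amount = amount;
        }
    }

    /** Sums price * amount over all order items. */
    static BigDecimal totalSum(List<OrderItem> items) {
        BigDecimal total = BigDecimal.ZERO;
        for (OrderItem item : items) {
            total = total.add(item.price.multiply(BigDecimal.valueOf(item.amount)));
        }
        return total;
    }

    public static void main(String[] args) {
        List<OrderItem> items = Arrays.asList(
                new OrderItem(new BigDecimal("19.90"), 2),
                new OrderItem(new BigDecimal("5.00"), 10));
        System.out.println(totalSum(items)); // prints 89.80
    }
}
```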
User Interface Generation

Every ERP system needs to be operated by users. In the end, they need to make decisions, retrieve data, or enter new records. While classical ERP systems offer clients for personal computers only, mobile devices like handhelds and mobile phones are now emerging. Because of this situation, the Federated ERP system will face many types of clients. Furthermore, these clients need to be easily updatable. For a simple process change it is not feasible to update hundreds of possibly mobile or distributed computers. Thus the user interface must be managed on the server-side and must be platform-neutral. Our approach for minimizing the effort needed to develop and customize the user interface is to automatically generate the interfaces from the business process descriptions. Much research has been done in the field of model-based user interfaces (MB-UI), which aims to model user interfaces in the way program logic is modeled in UML. Research in this field has been going on for more
Figure 5. Refinement of the business process and user interface generation
than a decade. For example, Paterno (1999) gives an overview of the field of MB-UI. Numerous design environments have been proposed as a result of MB-UI research. Each differs in the number and type of models used (for a thorough overview the reader is referred to da Silva, 2002). However, most approaches share a common element: the task model. Fortunately, this task model is easily related to our approach: the business process model is in fact a task model on a very high abstraction level (see Traetteberg, 1999). Furthermore, the field of MB-UI has matured. Especially insight into reasons for failure of some approaches has been beneficial for our research. Common mistakes and problems concerning practical adoption of MB-UI techniques are listed by Traetteberg et al. (2004): the biggest problem has been the complexity of the introduced models. While complex and detailed models give the designer the best level of control, such models are difficult to learn, time-consuming to design, and hard to maintain. Therefore, our approach particularly
strives to reduce the inherent complexity. This is especially important for being useful to the targeted, non-expert audience. Because we assume the business process to be already modeled, the user interface is expressed by stereotyping business functions. Four stereotypes have been introduced:

• Selection: The user shall select data from a collection of possible choices. For example: select a product from a catalogue.
• Edit: The user shall edit some information object from the data model. For example: edit an order.
• Control: The user wants to explicitly invoke some action. This is used to model navigational decisions. For example: “Accept order”.
• User: The user has to do something by himself, e.g. planning, comparing, etc.
These four actions can be attached to a business function and are visualized by small icons on the left-hand side. The annotated business processes are downloaded by the client software, which generates user interfaces from these models and sends the data and user decisions back to the server. This way, the user interface can be changed simply by installing new business process models on the server. For the generation of user interfaces, the data types are used to look up matching editors. Because Web services are based on XML, the data types are represented by XML Schema definitions. XML Schema defines data types recursively: primitive types can be grouped into complex types; complex and primitive types can be grouped into new complex types, and so on. Editors are created by traversing this structure and looking for matching editors registered in the system. At least for each primitive type, like integers and strings, an editor is provided by the system. Therefore, a (possibly primitive) editor can be generated for each XML Schema. Figure 4 shows a simple generated editor for a customer record. Figure 5 shows the hierarchical refinement of the example process with user interface stereotypes, and the resulting user interface using a custom editor. The client application shows the processes needing further action by the user, and the processes which are currently executed by someone else. This information is given on the right-hand side. Since the user interface generation is based on the business process description, context information can be given to the user. For example, descriptions of the currently active business function can be displayed. In our prototype these are realized by giving tool-tip information. At this point it is even possible to integrate experience bases to facilitate the communication between developers, process designers, and end users. Since this approach is based on the process description only, it is possible to generate user interfaces for different target platforms. For example, a connector for XForms (an XML standard for describing input forms) is under development, and generation of HTML pages is possible as well for integration into intranet and portal applications. For further discussion on the topic of generating user interfaces from EPC models, see Lüecke (2005) and Lübke et al. (2006).
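A minimal sketch of the recursive editor lookup described above, written over a simplified stand-in for the XML Schema type model rather than a real schema API; all class names are illustrative:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class EditorGenerator {

    interface Editor {}
    static class TextEditor implements Editor {}   // registered editor for strings
    static class NumberEditor implements Editor {} // registered editor for integers
    static class FormEditor implements Editor {    // composite editor for complex types
        final Map<String, Editor> children = new LinkedHashMap<String, Editor>();
    }

    /** Simplified stand-in for an XML Schema type: primitive or complex. */
    static class SchemaType {
        String primitive;                  // e.g. "string" or "int"; null if complex
        Map<String, SchemaType> elements;  // child elements of a complex type
    }

    private static final Map<String, Editor> REGISTRY = new HashMap<String, Editor>();
    static {
        REGISTRY.put("string", new TextEditor());
        REGISTRY.put("int", new NumberEditor());
    }

    /** Recursively traverses the type structure: registered editors are used
        for primitive types, and complex types become composite forms. */
    static Editor generate(SchemaType type) {
        if (type.primitive != null) {
            return REGISTRY.get(type.primitive);
        }
        FormEditor form = new FormEditor();
        for (Map.Entry<String, SchemaType> entry : type.elements.entrySet()) {
            form.children.put(entry.getKey(), generate(entry.getValue()));
        }
        return form;
    }
}
```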
FERP as an SOA Instance

Because an FERP system is based on Web Services and their orchestration, we can say that it is a service-oriented architecture solution whereby all functions are available as services and are unambiguously addressable. This system works as follows:

1. The network consists of service-consuming and service-providing network nodes.
2. Each client which provides an interface to an enterprise is called a mandator and is connected to the enterprise database.
3. The processing steps of a business process are stored in the local database of a mandator as a workflow. A workflow in this context is a plan of sequentially or parallelly chained functions as working steps, in the sense of activities which lead to the creation or utilization of business benefits.
4. Finding a function within the P2P network means that a request which contains the function type must be sent to all service-providing peers.
5. After receiving the responses to a function type request, the mandator must elect a network node to be accessed.
6. A function call contains parameters as business objects and other (primitive) values that are delivered to the service-providing network node. A business object in this context is a snapshot of the enterprise database at a particular time in a standardized format. Function calls can contain other function calls.
7. A function returns a list of either directly modified business objects or independent values that are necessary for subsequent business object updates (e.g. intermediate data).
8. Returned business objects must be synchronized with the local database of the mandator.
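As a trivial illustration of step 5, a mandator might elect a provider from the collected responses as follows; the selection criteria (usage fee and availability) are assumptions made for this sketch, since the election policy is left open here:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class ProviderElection {

    static class ProviderOffer {
        final String endpointUrl;  // where the providing peer can be reached
        final double usageFee;     // assumed non-functional parameter
        final double availability; // assumed QoS value between 0 and 1

        ProviderOffer(String endpointUrl, double usageFee, double availability) {
            this.endpointUrl = endpointUrl;
            this.usageFee = usageFee;
            this.availability = availability;
        }
    }

    /** Elects the cheapest offer that meets a minimum availability. */
    static Optional<ProviderOffer> elect(List<ProviderOffer> offers, double minAvailability) {
        return offers.stream()
                .filter(offer -> offer.availability >= minAvailability)
                .min(Comparator.comparingDouble(offer -> offer.usageFee));
    }
}
```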
Business Process Modeling in the FERP Context

The FERP system is targeted at SMEs. This poses some additional challenges besides the already outlined technical problems. Especially the gathering of requirements, and as their most important part business processes, has to be performed before any implementation of an (F)ERP system can start. However, in most SMEs the business processes are not defined explicitly; rather, the organization as a whole has tacit knowledge of the activities that are to be performed and the order in which
they have to be performed. But even if SMEs have documented business processes, these descriptions are typically not suited for software development or customization because technical details are missing. Therefore, the knowledge of process participants needs to be externalized into documented business processes that are detailed enough to serve as the basis for software development and customization projects. Within the FERP context, we propose a lightweight approach to elicit the business processes and to generate explicit models from them. These explicit business process models can be used to pre-generate and develop the workflows in FERP systems that are the technical representation of the business. The technique is constrained for this particular context as follows:

• Easy to use: The technique must be easy to use because SME employees usually do not have extensive technical knowledge and have no time for learning complex techniques.
• Easy to understand: When discussing results it is important that all stakeholders can participate and contribute. Therefore, the same restrictions apply to understanding as they do to usage.
• Lightweight: The technique must quickly save time for the employees involved. Not much effort may be spent, as this would impose undue costs and would hinder flexibility.
• Offer a basis for later development: The results must be usable by the customization team later on. The smoother this transition is, the better.
Table 1.

Use Case: #3: Thesis Supervisor hands out topic
Primary Actor: Thesis Supervisor
Stakeholders:
    Thesis Supervisor: wants to hand out topic easily and without much paperwork
    Student: wants to receive topic quickly
    Secretary: wants easy to use/read forms for completing registration
Minimal Guarantees: A topic is only handed out once at a time
Success Guarantees:
    Student knows topic
    Supervisor knows all needed administrative information of the student
Preconditions:
    Student has achieved at least 80% of credit points
    Student has clearance from Academic Examination Office
Triggers: Student wants to sign up for a topic
Main Success Scenario:
    1. Supervisor checks whether the topic is still available or not
    2. Supervisor reserves topic for student
    3. System updates list of current theses
    4. Supervisor confirms thesis topic and student’s information to the Academic Examination Office
    5. System sends confirmation to student, supervisor and Academic Examination Office
Extensions:
    1a. If topic is not available anymore, then EXIT
Use Cases as the Basis

We use a use case-based approach for interviewing the users and stakeholders in SMEs and documenting the results. Use Cases (Cockburn, 2005) are a technique from the Requirements Engineering community. They represent possible scenarios from the point of view of a single main actor. The most common form for documenting use cases is a table, as illustrated in Table 1. The table contains additional information, like preconditions and success conditions, that expresses goals and constraints from the business and software points of view that are associated with the use case. For our approach, we assume the use cases to conform to the metamodel illustrated in Figure 6. The metamodel captures the properties of the tabular template. A use case consists primarily of the main scenario, which has a sequence of steps. Each step can be extended for operational sequences that are not the default. Each extension consists of a new scenario. Each step within a scenario has an actor, i.e. the role or person that is performing the activity. A use case is written from the perspective of a main actor that is primarily concerned with

Figure 6. Use case metamodel
the goals the use case has. The goals are represented by success guarantees. A use case starts when the trigger applies and may be performed iff all preconditions are met. Because use cases are nearly freely-written text, they have the advantage that they are easily comprehensible by all kinds of users and stakeholders (Lübke, 2006). Furthermore, the tabular structure imposes a semi-formal format that can be processed by computer programs later on.
Because use cases conforming to the presented metamodel must not contain complex control-flow constructs, they only require a limited set of workflow patterns (van der Aalst, ter Hofstede, Kiepuszewski, & Barros, 2003). The following workflow patterns are needed within a use case:

• Sequence: A sequence is used to order the steps within a scenario.
• Exclusive Choice: For attaching extensions to a step, this type of split in the control-flow is needed.
• Simple Merge: When extensions jump back, the control-flow is merged with this type of merge.

Mapping Use Cases to Business Processes

Use Cases are written from the perspective of the main actor only. However, a set of use cases can represent a business process. Therefore, the individual use cases need to be joined together into one large process. The steps within a use case can be seen as the activities of the business process, and the actors of the steps become the actors of the business activities.
However, it is necessary to join use cases. This is done by comparing the triggers, preconditions, and success guarantees. The use cases are ordered in a way that success guarantees satisfy triggers and preconditions of the following use cases. Because a success guarantee can satisfy the preconditions of more than one use case, and the precondition and trigger of a use case can be
Figure 7. Joining multiple use cases in EPC notation to a single EPC model
satisfied by more than one use case, the following workflow patterns are needed:

• Parallel Split: The control-flow of one use case is split to several use cases because more than one precondition can be satisfied.
• Generalized Synchronized Merge: The precondition can be satisfied by one of many use cases.
Because all business process languages support these basic workflow-patterns, the generation can have any language as its target. We demonstrate this by generating EPCs that can be the basis for composing the Web services of an FERP system.
Generation of Event-Driven Process Chains

For demonstrating our approach we chose Event-Driven Process Chains (EPCs). They are easy to understand and are therefore well-suited for fostering communication between the stakeholders of FERP systems. The generation of a single use case is done by applying the following algorithm:

1. Preconditions and triggers are converted to events.
    ° If there is more than one event, the events will be joined by an AND-join.
2. Each step of the main scenario is converted to a function.
    ° Functions are connected with simple okay-events.
3. Success guarantees are converted to end events.
    ° If there is more than one event, an AND-split is introduced.
4. Extensions are introduced with an XOR-split.
    ° The extension condition becomes an event.
5. The extension scenario is converted like the main scenario.
    ° Extensions of extensions are handled recursively.
6. Return jumps are realized with an XOR-join.
    ° The join is introduced before the function that is the jump target.

This algorithm is applied to every use case. In the following step, the use cases have to be joined. Depending on different needs, several join strategies are available:
•
Large EPC: Generates a single, large EPC model from all use cases. Short EPC: Generates an EPC that has a function for each use case. The details of the use case are discarded and not displayed. This type of model is well-suited for discussing the ordering of use cases and the global control-flow. Short EPC with hierarchical refinement: EPCs allow function to be detailed in other EPCs. This approach combines the advantages of the first two approaches by generating a short EPC and placing a more detailed EPC for the use case behind every function.
Common to all three approaches is the strategy for merging the set of use cases to a single model. The events that have been generated by the algorithm above are unified, i.e. all events that have the same name are reduced to a single event. Connectors for the control-flow are introduced accordingly. This is illustrated in figure 7.
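To make the generation algorithm concrete, here is a compact Java sketch of steps 1 to 3 over simplified stand-in classes; UseCase and Epc below are illustrative and not the actual metamodel implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class EpcGenerator {

    static class UseCase {
        List<String> preconditions = new ArrayList<String>();
        List<String> steps = new ArrayList<String>();
        List<String> successGuarantees = new ArrayList<String>();
    }

    /** A flat EPC: an ordered list of event ("E:") and function ("F:") nodes. */
    static class Epc {
        List<String> nodes = new ArrayList<String>();
        void event(String name) { nodes.add("E:" + name); }
        void function(String name) { nodes.add("F:" + name); }
    }

    /** Steps 1-3 of the algorithm: preconditions and triggers become start
        events, each scenario step becomes a function followed by a simple
        okay-event, and success guarantees become end events. Extensions
        (steps 4-6) and the AND/XOR connectors are omitted for brevity. */
    static Epc generate(UseCase useCase) {
        Epc epc = new Epc();
        for (String pre : useCase.preconditions) {
            epc.event(pre); // joined by an AND-join if there is more than one
        }
        for (String step : useCase.steps) {
            epc.function(step);
            epc.event(step + " done"); // simple okay-event
        }
        for (String guarantee : useCase.successGuarantees) {
            epc.event(guarantee); // AND-split if there is more than one
        }
        return epc;
    }
}
```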
Advantages of the Use Case-Centered Approach

By using textual use cases, it is possible to document and discover business processes. Due to the use of plain, semi-formal text, they can be used
by all involved stakeholders. These stakeholders do not need to learn a new notation, nor do they have difficulties while interpreting and validating the documented parts of business processes. Use Cases can guide interviews with isolated stakeholders. The generation step then produces a global view by combining the use cases into a large business process model. The business process model is the foundation for later ongoing development. It can be readily used by developers. Therefore, our approach satisfies the constraints outlined above nicely.
Conclusion and Outlook

Within this chapter we have presented the FERP architecture as an architecture for ERP systems that is well-suited for SMEs; such systems can be flexibly altered, and are comparably cheap to install and maintain. However, in order to know what to change and what to install, business processes need to be defined first. Within SMEs such processes need to be documented because usually no explicit business process documentation exists. We proposed a use case-centered approach for eliciting the business processes in a way that is comprehensible by non-tech-savvy people. With the combination of these two parts, SMEs can introduce and maintain their ERP systems and can stay competitive in the market. While the FERP architecture was developed for addressing requirements of SMEs, the architecture may be suitable for larger companies as well. The assessment to what extent the architecture can scale will be part of our future work.
References

Cockburn, A. (2005). Writing Effective Use Cases. Amsterdam: Addison-Wesley Longman.

da Silva, P. P. (2002). User Interface Declarative Models and Development Environments: A Survey. In Palanque & Paternò (Eds.), DSV-IS, volume 1946 of Lecture Notes in Computer Science (pp. 207–226). London: Springer.

IfM Bonn (2008). Schlüsselzahlen Deutschland (Key Indicators Germany). Retrieved December 28, 2008, from http://www.ifm-bonn.org/index.php?id=99.

Lübke, D., Lüecke, T., Schneider, K., & Marx Gómez, J. (2006). Using Event-Driven Process Chains for Model-Driven Development of Business Applications. In Nüttgens & Mendling (Eds.), Proceedings of the XML4BPM 2006.

Lübke, D. (2006). Transformation of Use Cases to EPC Models. In M. Nüttgens, F. Rump, & J. Mendling (Eds.), Proceedings of the EPK 2006. CEUR Proceedings Vol. 224. http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-224/

Lüecke, T. (2005). Development of a Concept for Creating and Managing User Interfaces bound to Business Processes. Master’s Thesis, Leibniz Universität Hannover, Germany.

Paterno, F. (1999). Model-Based Design and Evaluation of Interactive Applications. London, United Kingdom: Springer-Verlag.

Rautenstrauch, C., & Schulze, T. (2003). Informatik für Wirtschaftswissenschaftler und Wirtschaftsinformatiker. Berlin.

Robey, D., Ross, J., & Boudreau, M. (2002). Learning to implement enterprise systems: An exploratory study of the dialectics of change. Journal of Management Information Systems, 19(1), 17–46.

Trætteberg, H. (1999). Modelling Work: Workflow and Task Modelling. In Vanderdonckt & Puerta (Eds.), CADUI (pp. 275–280). Kluwer.

Trætteberg, H., Molina, P. J., & Nunes, N. J. (2004). Making model-based UI design practical: usable and open methods and tools. In Vanderdonckt, Nunes, & Rich (Eds.), Intelligent User Interfaces (pp. 376–377). ACM.

van der Aalst, W., ter Hofstede, A. H. M., Kiepuszewski, B., & Barros, A. P. (2003). Workflow Patterns. Distributed and Parallel Databases, 14, 5–51.
Endnotes

1. Hyperjaxb2 – relational persistence for JAXB objects: https://hyperjaxb2.dev.java.net/ (last visit October 2006)
2. http://yawlfoundation.org/
3. http://struts.apache.org/
4. http://jaxb.dev.java.net/
5. http://www.hibernate.org/
6. http://ws.apache.org/juddi/
7. http://java.sun.com/webservices/jaxrpc/
8. http://ws.apache.org/axis/
9. In order to improve understandability, the process was simplified. Changes of entered data and order items are not supported.
This work was previously published in Enterprise Information Systems for Business Integration in SMEs: Technological, Organizational, and Social Dimensions, edited by Maria Manuela Cruz-Cunha, pp. 286-299, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 2.12
Creation of a Process Framework for Transitioning to a Mobile Enterprise

Bhuvan Unhelkar
MethodScience.com & University of Western Sydney, Australia

DOI: 10.4018/978-1-60566-156-8.ch006
Abstract

This chapter presents the creation of a process framework that can be used by enterprises in order to transition to mobile enterprises. This framework facilitates the adoption of mobile technologies by organizations in a strategic manner. A mobile enterprise transition framework provides a process for transition that is based on the factors that influence such a transition. The Mobile Enterprise Transition (MET) framework outlined in this chapter is based on the four dimensions of economy, technology, methodology, and sociology. These four dimensions for MET have been identified based on an understanding of people, processes, and technologies. A research project undertaken by the author validates these four dimensions.

Introduction

This chapter presents an approach to transitioning to mobile enterprises. An earlier outline of this approach was published by Unhelkar (2005), and it contained three dimensions of a mobile enterprise transition framework. Later, based on the research undertaken by the author, this transition framework was modified and extended into a four-dimensional framework. This framework is the core discussion topic of this chapter. A complete and in-depth discussion of this process framework also appears in Unhelkar (2008). Mobile technologies form the basis of the communications revolution that has resulted in the elimination of physical connectivity for people, processes, and things. This wireless connectivity has had a significant impact on the organization of the business and its relationship with customers. The ability of businesses and customers
to connect to each other ubiquitously, independent of time and location, using mobile technologies is the core driver of this change. However, successful change in terms of adoption of mobile technologies and applications in an organization depends on a process framework. This chapter discusses a Mobile Enterprise Transition framework for transitioning an organization to a mobile organization. The purpose of this MET framework is to provide guidance in terms of the people, processes, and technologies involved in successful transitioning of enterprises. A MET can be defined by extending and refining an earlier definition of mobile transformation given by Marmaridis and Unhelkar (2005) as the “evolution of business practices through the adoption of suitable mobile technologies and processes resulting in pervasiveness.” This definition suggests that the MET will facilitate incorporation of mobile technologies in business processes that will result in pervasive business activities independent of location and time. The understanding of MET, however, needs to be based on a firm understanding of how mobility is unique and how it is different from land-based Internet connectivity.
Considering the Nature of Mobility in the “MET” Framework

Electronic business transitions have been studied, amongst others, by Ginige et al. (2001) and Lan and Unhelkar (2005). However, the uniqueness of mobile technologies in terms of their impact on business has been discussed by Marmaridis and Unhelkar (2005), Arunatileka and Unhelkar (2003), Godbole and Unhelkar (2003), Lan and Unhelkar (2005), and Unhelkar (2008). These authors have focussed on the specific nature of mobility as depicted in Figure 1. The inner square in Figure 1 indicates land-based connectivity between enterprises, functional units, and other fixed devices. This connectivity evolved from the initial centralized connectivity of the mainframe, followed by client-server connectivity, and finally resulted in Internet connectivity (business to business, B2B, and business to customer, B2C). The Internet-based connectivity is further augmented by XML (eXtensible Markup Language) to facilitate the Internet as a medium of computing, rather than merely as a means of communication. However, as depicted by the outer square in Figure 1, external wireless connectivity, by its very nature, is between an individual and the business or between two individuals. As correctly stressed
Figure 1. Mobility is personal in nature (based on Unhelkar, 2005)
by Elliott and Phillips (2004), a mobile phone is a far more personal device, carried by an individual, as compared with a desktop personal computer. This nature of wireless connectivity needs to be understood and incorporated in all dimensions of MET (this could be based on discussions such as Thai et al. (2003); the four dimensions are discussed next). For example, economically, the cost of a mobile device has dropped and continues to drop significantly, making it obligatory for businesses to consider mobility in order to access and serve the customer. Technically, it is essential to consider the “individuality” of the mobile gadgets and their ability to be location-aware (see Adam and Katos (2005) for the resultant challenges of security and privacy of individuals, which are different from the challenges in corresponding land-based connections). The activities and tasks of the methodological dimension should be created in a way that properly exploits the hand-held gadgets; and socially, mobile transitions need to consider the impact of mobility on the socio-cultural fabric of the society and the corresponding changing value systems, like work ethics and social protocols. Thus, when businesses transition from the land-based connectivity paradigm and incorporate wireless connectivity in their business practices, they have to ensure that the individual is considered in all their process dimensions, as compared to when the business was organized only around land-based workstations.
The Four Dimensions of a Mobile Enterprise Transition Framework

When an organization decides to incorporate mobility in its business, it is a strategic decision that is based on the primary question of “why” to “mobilize”. This strategic decision is followed by investigations into the technical, methodological and social dimensions, dealing respectively with
the “what”, “how” and “who” of the process. The transition process framework thus resolves into four major dimensions, as shown in Figure 2 (based on Unhelkar, 2003a, 2008). These dimensions of mobile transitions are, however, not mutually exclusive. When applied in practice, they tend to overlap each other, resulting in a cohesive transition process. However, a separate understanding of each of these dimensions is helpful in creating the MET framework in the first place. Thus, this discussion starts with the theoretical framework and then discusses its application in practice. The four process dimensions for a mobile enterprise transition, as shown in Figure 2, can be understood as follows:

• The Economic dimension deals with the core business driver of “why” to undertake the transition. Costs and competition have been the core business drivers for most business decisions, and they are also true in this case. Reduction in costs and increase in competition encourage the business to undertake a formal MET.
• The Technical dimension of the mobile transformation process considers “what” technologies are used in the transformation, and “what” deliverables are produced at the end of the process. Examples of issues discussed in this dimension include devices/gadgets, programming, databases, networking, security, and the architecture of mobile technologies participating in the MET.
• The Methodological dimension of the process deals primarily with the question of “how” to, amongst other things, model and design business processes, approach methodologies and quality in software processes, and re-organize the business structure and its relationship with customers and employees.
• The Social dimension of the transformation process focuses on “who” is involved in and influenced by the process. Typically these are the users of the business (e.g. customers and employees). The discussion in this dimension deals with the effect of mobile technologies on the socio-cultural aspects of people’s lives, especially the changing working formats and work ethics, and on organizational and social structures.
Figure 2. Four dimensions of a mobile enterprise transition framework (based on Unhelkar, 2008)
The aforementioned four dimensions of MET are now discussed in greater detail.
Economic Dimension in M-Transformation

The question of “why” to “mobilize” provides the strategic reason for a business undertaking MET. The economic dimension considers the costs of running the business and maps them against the cost of acquiring mobility. It is also concerned with the competition and how competitors are putting pressure on the business. Furthermore, the economic dimension investigates the potential effect of mobility on customers as well as employees of the organization in terms of providing efficient service to the customers by conducting the business efficiently. Thus, “why” to “mobilize” is a strategic decision
undertaken by the business decision makers of the organization. This decision to transition using the MET process framework can take the business to a global mobile playing field and therefore it has to be taken carefully and seriously.
Technical Dimension in M-Transformation

The technical dimension of the MET framework is concerned with the question of “what” technology to use, and “what” deliverables are produced at the end of the process. Thus, the technical dimension primarily includes the understanding and the application of the various mobile hardware devices and gadgets, issues of GPS-enabled gadgets (3G), and wireless networking and security. The types of devices, their capacity and costs, and their usability are of utmost importance in this dimension. Mobile devices have developed well beyond the ubiquitous mobile phone, and now include a wide range of devices like Personal Digital Assistants (PDAs) and wireless computers (typically laptops with inbuilt wireless processors). Each device can have numerous features that need to be considered when they are incorporated in the business processes. For example, the ability of Wi-Fi enabled mobile gadgets to take photographs onsite and to instantly transmit them to the business centre has invaluable applications in related processes, for example in insurance claims, medical emergencies, and sports. Additional device-related issues include the ability of the devices to function alone (standalone) or with other wireless components; for example, some wireless PCMCIA cards cannot be connected to the Internet and receive SMS messages simultaneously. The functionality of mobile devices is further augmented by mobile networks, which play a significant role in terms of the device’s abilities to browse, locate, and transmit information. Some examples of technological challenges in networking include the incorporation of wireless
broadband services, creation of local hotspots for services, integration of devices and their software with the existing organizational infrastructure, combination of WLAN (Wireless Local Area Network) and WWAN (Wireless Wide Area Network), satellite communications, and even simple issues like user access through VPN or dial-up connections. Mobile networking influences the breadth of coverage (i.e. the area being covered) and the depth of coverage (the amount of information being transmitted/received) based on the available bandwidth, speed of transmission, and reliability. At the individual level, the abilities of infrared and Bluetooth enabled devices and the potential offered by Radio Frequency Identification tags (RFID) are also increasingly playing an important role in mobile enterprise transitions. Wireless connectivity has further provided opportunities for handheld devices to use the computing power of other handheld and stationary devices, leading to the creation of wireless grids (McKnight and Howison, 2004; Unhelkar, 2004). Wireless grids facilitate the creation of virtual processors that can be used to deliver far more functionality than is evident today. For example, wireless grids can provide sufficient computing power for Mobile Internet Agents (as discussed by Subramanium et al. (2004)), enabling them to perform advanced functions like searching and comparing different products and services on handheld devices. Security (as discussed by Godbole (2003) in the context of the Internet) is the next important part of the technical dimension. Mobility provides greater opportunity for unscrupulous ‘tapping’ into the networks, siphoning off data, information, and even identities of the users as compared with land-based networks. And because many mobile-enabled business processes (e.g. in the healthcare domain with ambulances) depend heavily on the security of the mobile networks, protection of sensitive corporate data during transmission to and from mobile devices (channel security and
content security) is a vital issue in the technical dimension of MET. Databases have evolved into content management systems (CMS) that are capable of storing audio, video, photos, graphs, charts, and many other content formats that could not be stored in a standard relational database. The technical dimension of MET therefore has to consider issues like the type and nature of content, the location of content (influencing the speed and security of downloads), the duration of content before it is replaced or upgraded, the synchronization of content between the mobile devices and back-end servers, and so on. A mobile process is made up of a diverse range of heterogeneous mobile infrastructures, including the hardware, operating systems, messaging systems, and databases, which all need to be integrated to provide mobile functionality. Although most business applications today are web-enabled, they also need to be integrated further with mobile applications in order to make them mobile-web-enabled applications. Using SMS and MMS, creation of Wi-Fi hotspots, and incorporation of seamless wireless broadband through a single ISP are some practical options in such integrations. Some of the integration challenges for mobile infrastructures include issues such as the movement of users, mobile nodes, fluctuating demands on the infrastructure (notably networks and databases), changing security needs (depending on the type of information required), and reliability (the ability to restart a transaction from any point in time). Finally, usability can also be considered an important technical issue (in addition to being a sociological issue) in MET, especially as more and more functionality is being provided on small-screen mobile devices. Usability considerations, so vividly depicted by Constantine and Lockwood (1999), need to be extended further when it comes to mobile devices, to ensure information is provided succinctly to the users. Wireless Markup Language (WML) can play an important part in
removing unnecessary elements from the display of information on mobile devices.
Methodological Dimension in M-Transformation

The methodological dimension of MET primarily deals with the business processes, which can be understood as the “manner in which a business carries out its activities and tasks”. These activities and tasks deal both with external parties and internally with its employees and management. The modelling of these business processes is complicated by the fact that the users (especially customers) have an expanding array of different types and models of mobile gadgets available to them. Not only do organizations making the transition have little control over these devices, they are also faced with ever-increasing expectations by the users that the m-businesses will support their specific gadgets. Whenever a business transitions to a mobile business, three possibilities emerge with respect to its business processes:

• Firstly, existing business processes are re-engineered to incorporate mobility; the re-engineered processes will be impacted by mobility in terms of location and time independence for the users.
• Secondly, totally new business processes are brought in as a result of the introduction of mobile technology, which would otherwise not have existed.
• Thirdly, some redundant and/or irrelevant business processes are dropped because they do not make sense with the incorporation of mobility.
Each of the aforementioned process changes impacts the individual user, as well as the organization and even the business sector as a whole. For example, the manner in which an employee would
fill out his/her time sheets would change with the advent of mobility in the organization. Or the way in which a customer enquires about her balance in a banking environment will change depending on mobility in the process. Organizational processes, primarily dealing with other businesses, but also related to organizational structure and internal management, undergo change when mobility is incorporated in the business. Finally, as discussed by S’duk and Unhelkar (2005), an entire business sector made up of a group or collaboration of businesses could start interacting through wireless applications and networks, resulting in a need to reengineer processes that criss-cross an industrial sector. This may result in the need for industrial process reengineering (IPR). For example, with mobile connectivity, an airline and a car hire company may need to change their business processes together to inform an individual of changes to flight timings on the dashboard of her hired car; or a hospital, an insurance company, and a company providing roadside assistance may introduce new business processes that facilitate their collaboration to provide immediate care and support to a mobile subscriber who may have met with an accident. Similarly, advances in wireless grids further increase the opportunities for an industrial segment (or a dynamic group) to create entirely new business processes. For example, the creation of on-the-spot dynamic customer groups at shopping malls and sports venues was not feasible without mobile connectivity; with this mobile connectivity businesses are able to target a group of customers dynamically. Extending Kanter’s (2003) argument further, for future service growth mobile business processes need to continuously keep the user’s context in mind to be able to fully exploit the dynamicity offered by mobility. Deshpande et al. (2004), in the Device Independent Web Engineering workshop, describe how, with continuing advances in mobile communication capabilities and dramatic reductions in the costs of mobile devices, the demand for Web access from
many different types of mobile devices has now gone up substantially. Incorporation of mobile Internet access has resulted in a paradigm shift in the way businesses and individuals tend to use the Internet. For example, customers, instead of making a call using a mobile phone, may use its Internet capabilities to locate good consumer deals or find convenient service locations for their needs. The mobile Internet has also resulted in the evolution of customer relationship management (CRM) systems into what can be called mobile CRM (mCRM) systems (Arunatileka and Unhelkar, 2003). Reengineering of processes also needs to consider the devices that will facilitate the business processes. For example, for corporate solutions, we need to consider whether devices are already deployed and, if so, whether they can be reused or new devices are required for the business processes. Finally, there is also a need to keep software development methodologies in mind when the MET results in changes to the information systems of the transitioning organization. These methodologies can help in standardization, formal modelling of requirements, user-centered design, understanding the technology of mobile network architectures, and issues in integrating mobile software with existing software (Godbole, 2003). These technical process considerations will result in the implementation of good quality mobile applications and, as a result, would improve the overall quality of service offered by the business to its customers and users.
Social Dimension in M-Transformation

Mobility has had a significant impact on the quality of life of individuals and the society in which they live. While location-aware mobile connectivity has dramatically increased the ability of individuals to communicate, it has also produced challenges in terms of privacy and new social protocols. The effect of globalization, as discussed by Devereaus and Johansen (1994), now needs to be further considered in the context of a global-mobile society. This is so because when a business enterprise undergoes a MET, it affects the socio-cultural aspects of both individuals and the groups they are part of. For example, should an individual answer the mobile phone provided by the company during his or her private time? Or should the mobile service provider be allowed to send unsolicited promotional messages to its subscribers? While mobility enables people to be productive anytime and anywhere, the need to separate personal from official work and responsibilities is far greater in today’s mobile society. These social issues related to work ethics and behaviour in an Internet-based society, already studied by Ranjbar and Unhelkar (2003), need to be further extended and applied in the context of mobility in an MET. There are also social advantages resulting from mobile technology infrastructures that may impact MET. For example, in developing nations (and even otherwise), the infrastructure costs associated with land-based Internet connectivity are far higher than the corresponding costs of setting up wireless computing infrastructure. This results in an opportunity to reach people en masse through relatively cheap mobile devices, and to conduct business activities with them. The resulting change in the social landscape of a country is an extremely interesting phenomenon that needs to be studied under this social dimension of MET.

Considering Mobile Business Internet Usage and Levels in the “MET” Framework

Mobile Internet usage has been discussed by Unhelkar (2003b). This usage describes the increasingly complex application of the mobile Internet by businesses in informative, transactive, operative, and collaborative manners. The mapping of this usage with the four dimensions of the MET framework is summarised in Table 1. The subsequent sections describe this usage further from the point of view of understanding the MET.
Table 1. Mobility considerations in mobile Internet usage by business (based on Unhelkar, 2008)

Economic (Why)
    M-Informative: Costs; Nuisance
    M-Transactive: Profit sharing; Alliance formation
    M-Operative: Costs; Inventory
    M-Collaborative: Trust; Legal mandates

Technology (What)
    M-Informative: Device availability and access
    M-Transactive: Networking (Internet connectivity); Reliability; Security
    M-Operative: Intranet; Extranet; Groupware; Reliability
    M-Collaborative: Portals; Groupware; Standards and interoperability

Methodology (How)
    M-Informative: Personal process
    M-Transactive: Business Process Reengineering (BPR)
    M-Operative: Organizational policies; BPR
    M-Collaborative: Industrial Process Reengineering; Business collaboration

Sociology (Who)
    M-Informative: Privacy; Access
    M-Transactive: Security; Confidence; Convenience
    M-Operative: Security; Trust; Workplace regulations; Ethics
    M-Collaborative: Security; Trust; Socio-cultural issues
The subsequent sections describe that usage further from the point of view of understanding the MET.
Mobile Informative Layer

This is the usage of the Internet to merely provide information. As such, it is a one-way transfer of information requiring no security. Examples of mobile informative usage include broadcasting of schedules and information on products, services or places, capitalizing on the common Short Messaging Service (SMS) feature of mobile gadgets. Technically, availability of and access to a mobile device is important to enable the provision of information. However, methodologically, it is the individual's process (or mode of usage of the device) that influences how the information is received. Socially, the information layer of mobile Internet usage has the potential to degenerate into mobile SPAM and other unsolicited messaging, which is part of the MET challenge.
Mobile Transactive Layer

During this stage of Internet usage, businesses are more serious about conducting business on the Internet than they are during the informative usage. The transactive usage implies a "two-way" (or more) communication between the business and the user, resulting in sales of products and services as well
as dealing with the third-party credit/debit card organizations that facilitate online payments. Technically, devices such as a wireless-enabled PDA or laptop, with the necessary access and security on the mobile network, are required to conduct these transactions. Methodologically, it will usually be a transaction with a user "known" to the business (for example, a registered bank user conducting account transactions), requiring businesses to incorporate upfront registration processes in dealing with their customers. Socially, the convenience of conducting two-way transactions will affect individuals, but so will their confidence in the systems being used for such transactions.
Mobile Operative Layer

This layer deals with moving the core internal business processes (that are typically operational in nature) onto the mobile Internet. Common examples of operative business processes that transition to the mobile Internet include timesheets and inventory. Thus, mobile technologies may enable a manager using a simple mobile-enabled gadget (phone, pager or PDA) to keep track of employees; or may enable her to keep a tab on the inventory and place a re-order at the right time.
Mobile Collaborative Layer

This layer will result in numerous individuals as well as businesses all collaborating to satisfy the needs and demands of numerous other businesses. Groupware and portals are the technical starting points for collaborations. However, methodologically, with the collaborative usage of the mobile Internet, the MET will have to consider reengineering a group of collaborating businesses as opposed to reengineering only a single business. The creation of collaborating business clusters can be an interesting study on its own, as discussed by Unhelkar (2003b), and may be considered in greater detail in MET. Socially, though, collaborative usage introduces challenges in terms of the ability of business partners to communicate, trust and work together to satisfy common goals. Sociology, rather than technology, together with all the associated socio-cultural issues, is at the crux of the collaborative usage of the mobile Internet.
ENACTING MET

Describing a framework is not enough in practice. There is also a need to work out the details of enacting the framework. Thus, the enactment of MET is its practical implementation that would bring about a mobile transition in a real enterprise. During enactment, elements within the four dimensions of the MET discussed here need to be worked out in greater detail and carried out step by step. This execution of MET requires practical project planning and project management. Detailed discussion of the project management aspect of MET is out of scope for this discussion. However, there is a need to manage and contain the exposure to risks to the project, as well as to the business itself, as a result of MET. A well-known approach to reducing risks in MET enactment is to apply it to a pilot project. This pilot project can be created and enacted over a relatively small part of the business. Based on suggestions by Brans (2003) and the
practical experiences of the author, enactment of a MET pilot should consider at least the following:

• Planning the pilot over an entire end-to-end chain of a small section of the business
• Identifying the champions within the organization who can demonstrate and use the output of MET in their activities
• Creating a suite of performance metrics and evaluating the results of the pilot for mobile enterprise transition against those metrics
• Ensuring proper starting and completion of the pilot by announcing it and informing stakeholders
• Properly timing the transition to derive maximum benefit and cause minimum disruption to the normal functioning of the business
CONCLUSION AND FUTURE DIRECTIONS

This chapter provides the outline of a framework for transitioning an enterprise to a mobile enterprise. The chapter outlines an orderly approach to mobile transformation made up of the four dimensions. The MET discussed here needs to be further augmented with appropriate people and tools to enable successful enactment of a mobile transition. However, this current outline of a MET framework is laid down with the intention of giving direction to businesses incorporating mobility in the provision of information, in the conducting of transactions with external businesses and customers, and also internally in their operations.
REFERENCES

Adam, C., & Katos, V. (2005, June). The ubiquitous mobile and location-awareness time bomb. Cutter IT Journal, 18(6), 20–26.
Arunatileka, D., & Unhelkar, B. (2003). Mobile Technologies, providing new possibilities in Customer Relationship Management. Proceedings of the 5th International Information Technology Conference, Colombo, Sri Lanka, December.
Brans, P. (2003). Mobilize Your Enterprise: Achieving Competitive Advantage through Wireless Technology. Upper Saddle River, NJ: Hewlett-Packard, Pearson Education (Prentice Hall PTR).
Constantine, L., & Lockwood, L. (1999). Software for Use: A Practical Guide to Models and Methods of Usage-centered Design. Addison-Wesley. Also see www.foruse.com
Deshpande, Y., Murugesan, S., Unhelkar, B., & Arunatileka, D. (2004). Workshop on Device Independent Web Engineering: Methodological Considerations and Challenges in Moving Web Applications from Desktop to Diverse Mobile Devices. Proceedings of the Device Independent Web Engineering Workshop, Munich.
Devereaus, M., & Johansen, R. (1994). Global Work: Bridging Distance, Culture and Time. Jossey-Bass, 38-39.
Elliott, G., & Phillips, N. (2004). Mobile Commerce and Wireless Computing Systems. Harlow, England: Pearson/Addison-Wesley.
Ginige, A., Murugesan, S., & Kazanis, P. (2001). A Road Map for Successfully Transforming SMEs into E-Businesses. Cutter IT Journal, 14.
Godbole, N. (2003). Mobile Computing: Security Issues in Hand-held Devices. Paper presented at NASONES 2003, National Seminar on Networking and e-Security by the Computer Society of India.
Godbole, N., & Unhelkar, B. (2003). Enhancing Quality of Mobile Applications through Modeling. Proceedings of the Computer Society of India's 35th Convention, December, Indian Institute of Technology, Delhi, India.
Kanter, T. (2003, February). Going wireless, enabling an adaptive and extensible environment. Mobile Networks and Applications, 8(1), 37–50. ACM Press, New York, NY, USA.
Lan, Y., & Unhelkar, B. (2005). Global Enterprise Transitions. Hershey, PA: Idea Group Publication (IGI Press).
Marmaridis, I. (Makis), & Unhelkar, B. (2005). Challenges in Mobile Transformations: A Requirements modeling perspective for Small and Medium Enterprises. Proceedings of the International Conference on Mobile Business (ICMB), Sydney.
McKnight, L., & Howison, J. (2004). Wireless Grids: Distributed Resource Sharing by Mobile, Nomadic, and Fixed Devices. IEEE Internet Computing, Jul/Aug 2004 issue, http://dsonline.computer.org/0407/f/w4gei.htm (last accessed 19th July, 2004).
Ranjbar, M., & Unhelkar, B. (2003). Globalisation and Its Impact on Telecommuting: An Australian Perspective. Presented at IBIM03 - International Business Information Management Conference (www.ibima.org), Cairo, Egypt.
S'duk, R., & Unhelkar, B. (2005). Web Services Extending BPR to Industrial Process Reengineering. Proceedings of the International Resource Management Association (IRMA) Conference, http://www.irma-international.org, San Diego, USA, 15th to 18th May.
Subramanium, C., Kuppuswami, A., & Unhelkar, B. (2004). Relevance of State, Nature, Scale and Location of Business E-Transformation in Web Services. Proceedings of the 2004 International Symposium on Web Services and Applications (ISWS'04), June 21-24, 2004, Las Vegas, Nevada, USA; http://www.world-academy-of-science.org.
Thai, B., Wan, Seneviratne, A., & Rakotoarivelo, T. (2003). Integrated personal mobility architecture: A complete personal mobility solution. Mobile Networks and Applications, 8(1), 27–36. ACM Press, New York, NY, USA. doi:10.1023/A:1021115610456
Unhelkar, B. (2003a). Process Quality Assurance for UML-based Projects. Boston, MA: Addison-Wesley.
Unhelkar, B. (2003b). Understanding Collaborations and Clusters in the e-Business World. We-B Conference (www.we-bcentre.com; with Edith Cowan University), Perth, 23-24 Nov.
Unhelkar, B. (2004). Globalization with Mobility. Presented at ADCOM 2004, 12th International Conference on Advanced Computing and Communications, Ahmedabad, India.
Unhelkar, B. (2005). Transitioning to a Mobile Enterprise: A Three-Dimensional Framework. Cutter IT Journal, 18(8).
KEY TERMS AND DEFINITIONS

Mobile Technologies: Are made up of wireless networks, devices and contents. Mobile technologies are at the crux of the communications revolution.
Economic Dimension of MET: Describes the business reasons for undertaking transformation and includes discussions on costs and competition.
Technical Dimension of MET: Describes the technologies for transformation and includes devices/gadgets, programming, databases, networking, security and architecture.
Methodological Dimension of MET: Deals primarily with the question of "how" to, amongst other things, model and design business processes, and approach methodologies and quality in software processes.
Social Dimension of MET: Deals with "who" is involved in and influenced by the transformation; typically it includes the users, customers and employees of the business.
Unhelkar, B. (2008). Mobile Enterprise Transition and Management. New York, USA: Taylor and Francis (Auerbach) Publications.

This work was previously published in Handbook of Research in Mobile Business, Second Edition: Technical, Methodological and Social Perspectives, edited by Bhuvan Unhelkar, pp. 63-72, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 2.13
Development and Design Methodologies in DWM

James Yao, Montclair State University, USA
John Wang, Montclair State University, USA
Qiyang Chen, Montclair State University, USA
June Lu, University of Houston – Victoria, USA
DOI: 10.4018/978-1-59904-843-7.ch029

INTRODUCTION

Information systems were developed in the early 1960s to process orders, billings, inventory controls, payrolls, and accounts payables. Soon information systems research began. Harry Stern started the "Information Systems in Management Science" column in the Management Science journal to provide a forum for discussion beyond just research papers (Banker & Kauffman, 2004). Ackoff (1967) led the earliest research on management information systems for decision-making purposes and published it in Management Science. Gorry and Scott Morton (1971) first used the term decision support systems (DSS) in a paper and constructed a framework for improving management
information systems. Research topics on information systems and DSS have since diversified. One of the major topics has been how to get systems design right. As an active component of DSS, data warehousing became one of the most important developments in the information systems field during the mid-to-late 1990s. It has been estimated that about 95% of the Fortune 1000 companies either have a data warehouse in place or are planning to develop one (Wixon & Watson, 2001). Data warehousing is a product of business need and technological advances. Since the business environment has become more global, competitive, complex, and volatile, customer relationship management (CRM) and e-commerce initiatives are creating requirements for large, integrated data repositories and advanced analytical capabilities.
By using a data warehouse, companies can make decisions about customer-specific strategies such as customer profiling, customer segmentation, and cross-selling analysis (Cunningham, Song, & Chen, 2006). To analyze these large quantities of data, data mining has been widely used to find hidden patterns in the data and even discover knowledge from the collected data. Thus, how to design and develop a data warehouse and how to use data mining in data warehouse development have become important issues for information systems designers and developers. This article presents some of the currently discussed development and design methodologies in data warehousing and data mining, such as the multidimensional model vs. the relational entity-relationship (ER) model, corporate information factory (CIF) vs. multidimensional methodologies, data-driven vs. metric-driven approaches, top-down vs. bottom-up design approaches, data partitioning and parallel processing, materialized views, data mining, and knowledge discovery in databases (KDD).
BACKGROUND

Data warehouse design is a lengthy, time-consuming, and costly process. Any wrongly calculated step can lead to a failure. Therefore, researchers have devoted substantial effort to the study of design- and development-related issues and methodologies. Data modeling for a data warehouse is different from data modeling for an operational database, for example, online transaction processing (OLTP) data modeling. An operational system is a system that is used to run a business in real time, based on current data. An OLTP system usually adopts ER modeling and an application-oriented database design (Han & Kamber, 2006). An information system, like a data warehouse, is designed to support decision making based on historical point-in-time and prediction data for complex queries or data
mining applications (Hoffer, Prescott, & McFadden, 2007). A data warehouse schema is viewed as a dimensional model (Ahmad, Azhar, & Lukauskis, 2004; Han & Kamber, 2006; Levene & Loizou, 2003). It typically adopts either a star or snowflake schema and a subject-oriented database design (Han & Kamber, 2006). The schema design is the most critical part of the design of a data warehouse. Many approaches and methodologies have been proposed in the design and development of data warehouses. Two major data warehouse design methodologies have received particular attention. Inmon, Terdeman, and Imhoff (2000) proposed the CIF architecture. This architecture, in the design of the atomic-level data marts, uses a denormalized entity-relationship diagram (ERD) schema. Kimball (1996, 1997) proposed the multidimensional (MD) architecture. This architecture uses a star schema at atomic-level data marts. Which architecture should an enterprise follow? Is one better than the other? Currently, the most popular data model for data warehouse design is the dimensional model (Bellatreche & Mohania, 2006; Han & Kamber, 2006). Some researchers call this model the data-driven design model. Artz (2006) advocates the metric-driven view, which, as another view of data warehouse design, begins by identifying key business processes that need to be measured and tracked over time in order for the organization to function more efficiently. There has always been the issue of top-down vs. bottom-up approaches in the design of information systems, and the same holds for data warehouse design. These have been puzzling questions for business intelligence architects and data warehouse designers and developers. The next section extends the discussion on issues related to data warehouse and mining design and development methodologies.
DESIGN AND DEVELOPMENT METHODOLOGIES

Data Warehouse Data Modeling

Database design is typically divided into a four-stage process (Raisinghani, 2000). After requirements are collected, conceptual design, logical design, and physical design follow. Of the four stages, logical design is the key focal point of the database design process and most critical to the design of a database. An OLTP system design usually adopts an ER data model and an application-oriented database design (Han & Kamber, 2006). The majority of modern enterprise information systems are built using the ER model (Raisinghani, 2000). The ER data model is commonly used in relational database design, where a database schema consists of a set of entities and the relationships between them. The ER model is used to demonstrate detailed relationships between the data elements. It focuses on removing redundancy of data elements in the database. The schema is a database design containing the logic and showing relationships between the data organized in different relations (Ahmad et al., 2004). Conversely, a data warehouse requires a concise, subject-oriented schema that facilitates online data analysis. A data warehouse schema is viewed as a dimensional model which is composed of a central fact table and a set of surrounding dimension tables, each corresponding to one of the components or dimensions of the fact table (Levene & Loizou, 2003). Dimensional models are oriented toward a specific business process or subject. This approach keeps the data elements associated with the business process only one join away. The most popular data model for a data warehouse is the multidimensional model. Such a model can exist in the form of a star schema, a snowflake schema, or a starflake schema. The star schema (see Figure 1) is the simplest database structure, containing a central fact table with no redundancy, which is surrounded by a set
of smaller dimension tables (Ahmad et al., 2004; Han & Kamber, 2006). The fact table is connected with the dimension tables using many-to-one relationships to ensure their hierarchy. The star schema can provide fast response times, allowing database optimizers to work with simple database structures in order to yield better execution plans. The snowflake schema (see Figure 2) is a variation of the star schema model, in which all dimensional information is stored in the third normal form, thereby further splitting the data into additional tables while keeping the fact table structure the same. To take care of hierarchy, the dimension tables are connected with sub-dimension tables using many-to-one relationships. The resulting schema graph forms a shape similar to a snowflake (Ahmad et al., 2004; Han & Kamber, 2006). The snowflake schema can reduce redundancy and save storage space. However, it can also reduce the effectiveness of browsing, and system performance may be adversely impacted. Hence, the snowflake schema is not as popular as the star schema in data warehouse design (Han & Kamber, 2006). In general, the star schema requires greater storage, but it is faster to process than the snowflake schema (Kroenke, 2004). The starflake schema (Ahmad et al., 2004), also known as the galaxy schema or fact constellation schema (Han & Kamber, 2006), is a combination
Figure 1. Example of a star schema (adapted from Kroenke, 2004)
Figure 2. Example of a snowflake schema (adapted from Kroenke, 2004)
of the denormalized star schema and the normalized snowflake schema (see Figure 3). The starflake schema is used in situations where it is difficult to restructure all entities into a set of distinct dimensions. It allows a degree of crossover between dimensions to answer distinct queries (Ahmad et al., 2004). Figure 3 illustrates the starflake schema.
What needs to be differentiated is that the three schemas are normally adopted according to differences in design requirements. A data warehouse collects information about subjects that span the entire organization, such as customers, items, sales, and so forth. Its scope is enterprise-wide (Han & Kamber, 2006). The starflake (galaxy schema or fact constellation) schema can model multiple and interrelated subjects. Therefore, it is usually used to model an enterprise-wide data warehouse. A data mart, on the other hand, is similar to a data warehouse but limits its focus to one department subject of the data warehouse. Its scope is department-wide. The star schema and snowflake schema are geared towards modeling single subjects. Consequently, the star schema or snowflake schema is commonly used for data mart modeling, although the star schema is more popular and efficient (Han & Kamber, 2006).
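To make the schema shapes concrete, the following minimal sketch builds a star schema for a hypothetical retail sales subject using SQLite. The table and column names are illustrative assumptions, not taken from the cited sources; a snowflake variant would normalize the commented hierarchies into sub-dimension tables.

```python
import sqlite3

# A star schema for a hypothetical retail sales subject: one central
# fact table, each dimension only one join away (denormalized).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    day      TEXT,
    month    TEXT,
    year     INTEGER   -- hierarchy kept in one table (star style)
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    name        TEXT,
    category    TEXT    -- a snowflake would move category into its
                        -- own normalized sub-dimension table
);
CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units_sold  INTEGER,
    revenue     REAL    -- measures live in the central fact table
);
""")

# A typical subject-oriented OLAP query: one join per dimension.
rows = conn.execute("""
    SELECT d.year, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.category
""").fetchall()
```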
CIF vs. Multidimensional

Two major design methodologies have received particular attention in the design and development of data warehouses. Kimball (1996, 1997) proposed the MD architecture. Inmon et al. (2000) proposed the CIF architecture. Imhoff, Galemmco, and Geiger (2004) made a comparison between the two using important criteria such as scope, perspective, data flow, and so forth. One of the most significant differences between the CIF and MD architectures is the definition of the data mart. For the MD architecture, the design of the atomic-level data marts is significantly different from the design of the CIF data warehouse, while its aggregated data mart schema is approximately the
Figure 3. Example of a starflake schema (galaxy schema or fact constellation) (adapted from Han & Kamber, 2006)
same as the data mart in the CIF architecture. The MD architecture uses star schemas, whereas the CIF architecture uses a denormalized ERD schema. This data modeling difference constitutes the main design difference between the two architectures (Imhoff et al., 2004). A data warehouse may need both types of data marts in the data warehouse bus architecture, depending on the business requirements. Unlike the CIF architecture, there is no physical repository equivalent to the data warehouse in the MD architecture. The design of the two data marts is predominantly multidimensional for both architectures, but the CIF architecture is not limited to just this design and can support a much broader set of data mart design techniques. In terms of scope, both architectures deal with enterprise scope and business unit scope, with the CIF architecture putting a higher priority on enterprise scope and the MD architecture placing a higher priority on business unit scope. With the CIF architecture, the information technology (IT) side tackles the problem of supplying business intelligence source data from an enterprise point of view. With the MD architecture, its proponents emphasize the perspective of consuming business unit data. For data flow, in general, the CIF approach is top-down, whereas the MD approach is bottom-up. The difference between the two in terms of implementation speed and cost involves long-term and short-term trade-offs. A CIF project, as it is at the enterprise level, will most likely require more time and cost up front than the initial MD project, but subsequent CIF projects tend to require less time and cost than subsequent MD projects. The MD architecture claims that all its components must be multidimensional in design. Conversely, the CIF architecture makes no such claim; it is compatible with many different forms of business intelligence analyses and can support technologies that are not multidimensional in nature. For the MD architecture, retrofitting is significantly harder to accomplish. Imhoff et al. (2004) encourage the application of a combination of the data modeling techniques in the two
architectural approaches, namely, the ERD or normalization techniques for the data warehouse and the star schema data model for multidimensional data marts. A CIF architecture with only a data warehouse and no multidimensional marts is almost useless, and a multidimensional data-mart-only environment risks the lack of enterprise integration and support for other forms of business intelligence analyses.
Data-Driven vs. Metric-Driven

Currently, the most popular data model for data warehouse design is the dimensional model (Bellatreche & Mohania, 2006; Han & Kamber, 2006). In this model, data from OLTP systems are collected to populate dimensional tables. Researchers term a data warehouse design based on this model a data-driven design model, since the information acquisition processes in the data warehouse are driven by the data made available in the underlying operational information systems. Another view of data warehouse design is called the metric-driven view (Artz, 2006), which begins by identifying key business processes that need to be measured and tracked over time in order for the organization to function more efficiently. Advantages of the data-driven model include that it is more concrete, evolutionary, and uses derived summary data. Yet the information generated from the data warehouse may be meaningless to the user, owing to the fact that the nature of the derived summary data from OLTP systems may not be clear. The metric-driven design approach, on the other hand, begins by defining key business processes that need to be measured and tracked over time. After these key business processes are identified, they are modeled in a dimensional data model. Further analysis follows to determine how the dimensional model will be populated (Artz, 2006). According to Artz (2006), the data-driven model of data warehouse design has little future, since information derived from a data-driven model
is information about the data set. The metric-driven model, conversely, possibly has some key impacts and implications, because information derived from a metric-driven model is information about the organization. The data-driven approach dominates data warehouse design in organizations at present. The metric-driven approach, on the other hand, is still at the research stage, needing practical application evidence of its speculated, potentially dramatic implications.
Top-Down vs. Bottom-Up

There are, in general, two approaches to building a data warehouse (including its data marts) prior to commencing construction: the top-down approach and the bottom-up approach (Han & Kamber, 2006; Imhoff et al., 2004; Marakas, 2003). The top-down approach starts with a big picture of the overall, enterprise-wide design. The data warehouse to be built is large and integrated. However, this approach is risky (Ponniah, 2001). A top-down approach designs the warehouse with an enterprise scope. The focus is on integrating the enterprise data for usage in any data mart from the very first project (Imhoff et al., 2004). It implies a strategic rather than an operational perspective of the data. It serves the proper alignment of an organization's information systems with its business goals and objectives (Marakas, 2003). In contrast, a bottom-up approach designs the warehouse around business-unit needs for operational systems. It starts with experiments and prototypes (Han & Kamber, 2006). With bottom-up, departmental data marts are built first, one by one. It offers faster and easier implementation, favorable return on investment, and less risk of failure, but with a drawback of data fragmentation and redundancy. The focus of the bottom-up approach is to meet unit-specific needs with minimum regard to the overall enterprise-wide data requirements (Imhoff et al., 2004). An alternative to the two approaches discussed previously is to use a combined approach (Han
& Kamber, 2006), with which "an organization can exploit the planned and strategic nature of the top-down approach while retaining the rapid implementation and opportunistic application of the bottom-up approach" (p. 129), when such an approach is necessitated by the organizational and business scenarios at hand.
Materialized View

One of the advantages of the data warehousing approach over the traditional operational database approach to the integration of multiple sources is that queries can be answered locally without accessing the original information sources (Theodoratos & Sellis, 1999). Queries to data warehouses often involve a huge number of complex aggregations over vast amounts of data. As a result, data warehouses achieve high query performance by building a large number of materialized views (Bellatreche & Mohania, 2006; Jixue, Millist, Vincent, & Mohania, 2003; Lee, Chang, & Lee, 2004; Theodoratos & Sellis, 1999). One of the common problems related to materialized views is view selection. It is infeasible to store all the materialized views, as we are constrained by resources such as data warehouse disk space and maintenance cost (Bellatreche & Mohania, 2006). Therefore, an appropriate set of views should be selected among the candidate views. Studies have shown that the materialized view selection problem is NP-hard (Bellatreche & Mohania, 2006; Lee et al., 2004). Baralis, Paraboschi, and Teniente (1997); Mistry, Roy, Sudarshan, and Ramamritham (2001); and Theodoratos and Sellis (1997) researched the materialized view selection problem, but they considered only the intermediate query results that appeared in the given workloads' execution plans as candidate materialized views (Lee et al., 2004). As a result, views were excluded from the candidate view space if they joined relations not referred to in the queries, even though the views could have been used in
the optimal query execution plan (Bellatreche & Mohania, 2006; Chang & Lee, 1999). Lee et al. (2004) developed a solution for identifying the candidate view space of materialization, which demonstrates that the candidate view space can be optimized using the join-lossless property, partial views and union views. These methods can produce better results when the database schema, especially the join relations, gets more complex.
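Because exact view selection is NP-hard, practical systems fall back on heuristics. The sketch below shows a generic greedy benefit-per-cost selection under a space budget; it is a simplified illustration of the general idea, not the join-lossless/partial-view/union-view method of Lee et al. (2004), and the candidate views and their costs are invented.

```python
def greedy_view_selection(candidates, budget):
    """Pick materialized views greedily by benefit per unit of space.

    `candidates` maps a view name to a (benefit, size) pair: benefit is
    the estimated query-cost saving and size the storage cost; in a
    real system both estimates would come from the query optimizer.
    """
    selected, used = [], 0
    remaining = dict(candidates)
    while remaining:
        # Take the view with the best benefit/size ratio that still fits.
        name, (benefit, size) = max(
            remaining.items(), key=lambda kv: kv[1][0] / kv[1][1])
        del remaining[name]
        if used + size <= budget:
            selected.append(name)
            used += size
    return selected

# Hypothetical candidate views with (estimated benefit, size) pairs.
views = {"sales_by_month": (90, 40),
         "sales_by_region": (60, 50),
         "sales_by_product": (30, 10)}
print(greedy_view_selection(views, budget=60))
# -> ['sales_by_product', 'sales_by_month']; sales_by_region no longer fits
```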
Data Partitioning and Parallel Processing

Data partitioning is the process of decomposing large tables (fact tables, materialized views, indexes) into multiple small tables by applying selection operators (Bellatreche & Mohania, 2006). A good partitioning scheme is an essential part of designing a database that will benefit from parallelism (Singh, 1998). With well-performed partitioning, significant improvements in availability, administration, and table scan performance can be achieved. Singh (1998) described five methods of partitioning: (1) a hashing algorithm to distribute data uniformly across disks, (2) round-robin partitioning (assigning rows to partitions in sequence), (3) allocating rows to nodes based on ranges of values, (4) schema partitioning to tie a table to a particular partition, and (5) user-defined rules to allocate data to a particular partition. Bellatreche and Mohania (2006) and Bellatreche, Schneider, Mohania, and Bhargava (2002), on the other hand, offer two types of partitioning: horizontal and vertical. In horizontal fragmentation, each partition consists of a set of rows of the original table. In vertical fragmentation, each partition consists of a set of columns of the original table. Furthermore, horizontal fragmentation can be divided into two versions: primary horizontal partitioning and derived horizontal partitioning. The former is performed using predicates that are defined on that table, while the latter results from predicates defined on another relation.
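The sketch below renders three of the horizontal placement methods listed above (hash, round-robin, and range partitioning) as simple row-routing functions; the four-partition setup, sample row, and range boundaries are assumptions for illustration.

```python
def hash_partition(row, key, n_parts):
    # Distribute rows roughly uniformly by hashing the partitioning key.
    return hash(row[key]) % n_parts

def round_robin_partition(row_index, n_parts):
    # Assign rows to partitions in sequence, ignoring their content.
    return row_index % n_parts

def range_partition(row, key, boundaries):
    # Allocate rows based on ranges of key values; boundaries [100, 500]
    # yields partitions for <100, <500, and everything else.
    for part, upper in enumerate(boundaries):
        if row[key] < upper:
            return part
    return len(boundaries)

row = {"customer_id": 4217, "amount": 320}
print(hash_partition(row, "customer_id", 4))        # hash-based placement
print(round_robin_partition(row_index=9, n_parts=4))  # -> 1
print(range_partition(row, "amount", [100, 500]))     # -> 1 (100 <= 320 < 500)
```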
Parallel processing is based on a parallel database, in which multiple processors are in place. Parallel databases link multiple smaller machines to achieve the same throughput as a single, larger machine, often with greater scalability and reliability than single-processor databases (Singh, 1998). In the context of relational online analytical processing (ROLAP), by partitioning the data of a ROLAP schema (star schema or snowflake schema) among a set of processors, OLAP queries can be executed in parallel, potentially achieving a linear speedup and thus significantly improving query response time (Datta, Moon, & Thomas, 1998; Tan, 2006). Given the size of contemporary data warehousing repositories, multiprocessor solutions are crucial for the massive computational demands of current and future OLAP systems (Dehne, Eavis, & Rau-Chaplin, 2006). Most fast computation algorithms assume that they can be applied in a parallel processing system (Dehne et al., 2006; Tan, 2006). It is also sometimes necessary to use parallel processing for data mining, because large amounts of data and massive search efforts are involved in data mining (Turban, Aronson, & Liang, 2005). Therefore, data partitioning and parallel processing are two complementary techniques for reducing query processing cost in data warehousing design and development (Bellatreche & Mohania, 2006).
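A minimal sketch of the partition-then-aggregate idea behind parallel ROLAP execution: each worker scans one horizontal fragment of the fact data and the partial results are combined, which is where the potential linear speedup comes from. The data and worker count are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

def scan_fragment(rows):
    # Each processor aggregates its own horizontal fragment.
    return sum(r["revenue"] for r in rows)

if __name__ == "__main__":
    fact_rows = [{"revenue": float(i)} for i in range(1_000)]
    # Horizontally fragment the fact table across 4 workers (round-robin).
    fragments = [fact_rows[i::4] for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = pool.map(scan_fragment, fragments)
    # Combining the partial aggregates gives the same answer as a serial scan.
    print(sum(partials))  # 499500.0
```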
Data Mining

As a process, data mining endeavors require certain steps to achieve a successful outcome. A step common to data mining is infrastructure preparation. Without the infrastructure, data mining activities simply will not occur. Minimum requirements for the infrastructure are a hardware platform, a database management system (DBMS) platform, and one or more tools for data mining (Marakas, 2003). The hardware platform, in most cases, is a separate platform from that which originally housed the data. The data mining
environment is usually a client/server architecture or a Web-based architecture (Turban et al., 2005). To perform data mining, data must be removed from its host environment and prepared before it can be properly mined. This is called the extraction, transformation, and loading (ETL) process, in which the data is scrubbed/cleaned and transformed according to the requirements and then loaded to the hardware platform, usually a data warehouse or data mart (Han & Kamber, 2006; Marakas, 2003; Turban et al., 2005). A DBMS is the fundamental system for the database. Sophisticated tools and techniques for data mining are selected based on mining strategies and needs. A well-developed infrastructure is a precondition for a successful data mining endeavor. Data mining is the analysis of observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner (Hand, Mannila, & Smyth, 2001). Another term that is frequently used interchangeably with data mining is KDD. KDD was coined to describe all those methods that seek to find relations and regularity among the observed data, and was gradually expanded to describe the whole process of extrapolating information from a database, from the identification of the initial business objectives to the application of the decision rules (Giudici, 2003). Although many people treat data mining as a synonym for KDD, there are differences between the two. Technically, KDD is the application of the scientific method to data mining (Roiger & Geatz, 2003). Apart from performing data mining, a typical KDD process model includes a methodology for extracting and preparing data as well as making decisions about actions to be taken once data mining has taken place. The KDD process involves selecting the target data, preprocessing the data, transforming them if necessary, performing data mining to extract patterns and relationships, and then interpreting and assessing the discovered structures (Hand et al., 2001). In
contrast, data mining is the process of forming general concept definitions by observing specific examples of concepts to be learned. It is a process of business intelligence that can be used together with what is provided by IT to support company decisions (Giudici, 2003). A data warehouse has uses other than data mining. However, the fullest use of a data warehouse must include data mining (Marakas, 2003), which in turn, at an upper level, discovers knowledge for the user.
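A compact sketch of the KDD steps just described (select, preprocess, transform, mine, interpret), using simple item frequencies as a stand-in for the mining stage; the records and support threshold are invented for illustration.

```python
from collections import Counter

# Hypothetical source records extracted from an operational system.
raw = [{"customer": "a", "item": "tea"}, {"customer": "a", "item": None},
       {"customer": "b", "item": "tea"}, {"customer": "c", "item": "milk"}]

# Preprocess: scrub records with missing values (the cleaning in ETL).
clean = [r for r in raw if r["item"] is not None]

# Transform: project onto the attribute of interest.
items = [r["item"] for r in clean]

# Mine: find frequent patterns (here, trivial item frequencies).
support = Counter(items)
patterns = {item: n for item, n in support.items() if n >= 2}

# Interpret/assess: report the discovered structure to the data owner.
print(patterns)  # {'tea': 2}
```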
FUTURE TRENDS

Future information systems research will continue with the study of problems in information systems management, including systems analysis and design (Banker & Kauffman, 2004). According to Cunningham et al. (2006), there are no agreed-upon standardized rules for how to design a data warehouse to support CRM, and a taxonomy of CRM analyses needs to be developed to determine the factors that affect design decisions for CRM data warehouses. Enterprises are moving towards building operational data stores, which derive data from enterprise resource planning (ERP) systems, as solutions for real-time business analysis. There is a need for active integration of CRM with the operational data store for real-time consulting and marketing (Bellatreche & Mohania, 2006). In the data modeling area, to develop a more general solution for modeling data warehouses, the current ER model and dimensional model need to be extended to the next level to combine the simplicity of the dimensional model and the efficiency of the ER model with support for object-oriented concepts (Raisinghani, 2000).
CONCLUSION

Several data warehousing and data mining development and design methodologies have been reviewed and discussed, followed by some trends
in data warehousing design. There are more issues related to the topic, but coverage is limited by the article's size. Some of the methodologies have been practiced in the real world and accepted by today's businesses. Yet new, challenging methodologies, particularly in data modeling and models for physical data warehousing design, need to be further researched and developed.
REFERENCES

Ackoff, R. I. (1967). Management misinformation systems. Management Science, 14(4), 147–156. doi:10.1287/mnsc.14.4.B147
Ahmad, I., Azhar, S., & Lukauskis, P. (2004). Development of a decision support system using data warehousing to assist builders/developers in site selection. Automation in Construction, 13, 525–542. doi:10.1016/j.autcon.2004.03.001
Artz, J. M. (2006). Data driven vs. metric driven data warehouse design. In J. Wang (Ed.), Encyclopedia of data warehousing and mining (pp. 223-227). Hershey, PA: Idea Group.
Banker, R. D., & Kauffman, R. J. (2004, March). The evolution of research on information systems: A fiftieth-year survey of the literature in Management Science. Management Science, 50(3), 281–298. doi:10.1287/mnsc.1040.0206
Baralis, E., Paraboschi, S., & Teniente, E. (1997). Materialized view selection in multidimensional database. In Proceedings of the 23rd International Conference on Very Large Data Bases (VLDB'97) (pp. 156-165).
Bellatreche, L., & Mohania, M. (2006). Physical data warehousing design. In J. Wang (Ed.), Encyclopedia of data warehousing and mining (pp. 906-911). Hershey, PA: Idea Group.
Bellatreche, L., Schneider, M., Mohania, M., & Bhargava, B. (2002). PartJoin: An efficient storage and query execution for data warehouses. In Proceedings of the 4th International Conference on Data Warehousing and Knowledge Discovery (DAWAK'02) (pp. 109-132).
Chang, J., & Lee, S. (1999). Extended conditions for answering an aggregate query using materialized views. Information Processing Letters, 72(5-6), 205–212. doi:10.1016/S0020-0190(99)00147-7
Cunningham, C., Song, I., & Chen, P. P. (2006, April-June). Data warehouse design to support customer relationship management analyses. Journal of Database Management, 17(2), 62–84.
Datta, A., Moon, B., & Thomas, H. (1998). A case for parallelism in data warehousing and OLAP. Proceedings of the 9th International Workshop on Database and Expert Systems Applications (DEXA'98) (pp. 226-231).
Dehne, F., Eavis, T., & Rau-Chaplin, A. (2006). The cgmCUBE project: Optimizing parallel data cube generation for ROLAP. Distributed and Parallel Databases, 19, 29–62. doi:10.1007/s10619-006-6575-6
Giudici, P. (2003). Applied data mining: Statistical methods for business and industry. West Sussex, England.
Gorry, G. A., & Scott Morton, M. S. (1971). A framework for management information systems. Sloan Management Review, 13(1), 1–22.
Han, J., & Kamber, M. (2006). Data mining: Concepts and techniques (2nd ed.). San Francisco: Morgan Kaufmann.
Hand, D., Mannila, H., & Smyth, P. (2001). Principles of data mining. Cambridge, MA: MIT Press.
Hoffer, J. A., Prescott, M. B., & McFadden, F. R. (2007). Modern database management (8th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Imhoff, C., Galemmco, M., & Geiger, J. G. (2004). Comparing two data warehouse methodologies (Database and network intelligence). Database and Network Journal, 34(3), 3–9.
Inmon, W. H., Terdeman, R. H., & Imhoff, C. (2000). Exploration warehousing: Turning business information into business opportunity. New York: John Wiley & Sons, Inc.
Jixue, L., Millist, W., Vincent, M., & Mohania, K. (2003). Maintaining views in object-relational databases. Knowledge and Information Systems, 5(1), 50–82. doi:10.1007/s10115-002-0067-z
Kimball, R. (1996). The data warehouse toolkit: Practical techniques for building dimensional data warehouses. New York: John Wiley & Sons.
Kimball, R. (1997, August). A dimensional modeling manifesto. DBMS, 10(9), 58–70.
Kroenke, D. M. (2004). Database processing: Fundamentals, design and implementation (9th ed.). Upper Saddle River, NJ: Prentice Hall.
Lee, T., Chang, J., & Lee, S. (2004). Using relational database constraints to design materialized views in data warehouses. APWeb, 2004, 395–404.
Levene, M., & Loizou, G. (2003). Why is the snowflake schema a good data warehouse design? Information Systems, 28(3), 225–240. doi:10.1016/S0306-4379(02)00021-2
Marakas, G. M. (2003). Modern data warehousing, mining, and visualization: Core concepts. Upper Saddle River, NJ: Pearson Education Inc.
Mistry, H., Roy, P., Sudarshan, S., & Ramamritham, K. (2001). Materialized view selection and maintenance using multi-query optimization. In Proceedings of the ACM SIGMOD International Conference on Management of Data 2001 (pp. 310-318).
Ponniah, P. (2001). Data warehousing fundamentals: A comprehensive guide for IT professionals. New York: John Wiley & Sons, Inc.
Raisinghani, M. S. (2000). Adapting data modeling techniques for data warehouse design. Journal of Computer Information Systems, 4(3), 73–77.
Roiger, R. J., & Geatz, M. W. (2003). Data mining: A tutorial-based primer. New York: Addison-Wesley.
Singh, H. S. (1998). Data warehousing: Concepts, technologies, implementations, and management. Upper Saddle River, NJ: Prentice Hall PTR.
Tan, R. B. (2006). Online analytical processing systems. In J. Wang (Ed.), Encyclopedia of data warehousing and mining (pp. 876-884). Hershey, PA: Idea Group.
Theodoratos, D., & Sellis, T. (1997). Data warehouse configuration. In Proceedings of the 23rd International Conference on Very Large Data Bases (VLDB'97) (pp. 126-135).
Theodoratos, D., & Sellis, T. (1999). Designing data warehouses. Data & Knowledge Engineering, 31(3), 279–301. doi:10.1016/S0169-023X(99)00029-4
Turban, E., Aronson, J. E., & Liang, T. (2005). Decision support systems and intelligent systems (7th ed.). Upper Saddle River, NJ: Pearson Education Inc.
Wixon, B. H., & Watson, H. (2001). An empirical investigation of the factors affecting data warehousing success. MIS Quarterly, 25(1), 17–41. doi:10.2307/3250957
KEY TERMS AND DEFINITIONS

Dimensional Model: A dimensional model contains a central fact table and a set of
surrounding dimension tables, each corresponding to one of the components or dimensions of the fact table.
Dimensions: Dimensions are the perspectives or entities with respect to which an organization wants to keep records (Han & Kamber, 2006, p. 110).
Entity-Relationship Data Model: An entity-relationship data model is a model that represents a database schema as a set of entities and the relationships among them.
Fact Table: A fact table is the central table in a star schema, containing the names of the facts, or measures, as well as keys to each of the related dimension tables.
Knowledge Discovery in Databases (KDD): KDD is the process of extrapolating information from a database, from the identification of the initial business aims to the application of the decision rules (Giudici, 2003, p. 2).
Materialized View: Materialized views are copies or replicas of data based on SQL queries, created in the same manner as dynamic views (Hoffer et al., 2007, p. 298).
Metric-Driven Design: Metric-driven design is a data warehousing design approach which begins by defining key business processes that need to be measured and tracked over time. They are then modeled in a dimensional model.
Parallel Processing: Parallel processing is the allocation of the operating system's processing load across several processors (Singh, 1998, p. 209).
Star Schema: A star schema is a modeling diagram that contains a large central table (fact table) and a set of smaller attendant tables (dimension tables), each represented by only one table with a set of attributes.
This work was previously published in Encyclopedia of Decision Making and Decision Support Technologies, edited by Frederic Adam and Patrick Humphreys, pp. 236-244, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 2.14
Facilitating Design of Efficient Components by Bridging Gaps Between Data Model and Business Process via Analysis of Service Traits of Data

Ning Chen, Xi'an Polytechnic University, China
DOI: 10.4018/978-1-60566-330-2.ch007

ABSTRACT

In many large-scale enterprise information system solutions, process design, data modeling and software component design are performed relatively independently by different people using various tools and methodologies. This usually leads to gaps among business process modeling, component design and data modeling. Currently, these functional or non-functional disconnections are fixed manually, which increases the complexity and decreases the efficiency and quality of development. In this chapter, a pattern-based approach is proposed to bridge the gaps with automatically generated data access
components. Data access rules and patterns are applied to optimize these data access components. In addition, the authors present the design of a toolkit that automatically applies these patterns to bridge the gaps to ensure reduced development time and higher solution quality.
INTRODUCTION

With the development of information technology, enterprise information becomes more complex and tends to change more frequently; consequently, an enterprise should adjust its business according to the market, which requires enterprise IT systems to be flexible and agile enough to respond to the changes. Business process modeling now consists of service modeling, data modeling and component modeling, which are the three main threads in enterprise IT system solution design (Ivica, 2002;
Mei, 2003). They are usually performed relatively independently, for different roles employ different methodologies. The result is a gap among the process model, data model and components, which requires a significant amount of effort to fill. An enterprise information system is an application with dense data (Martin, 2002) and mass data access. Both functional and non-functional requirements, such as system response time and data throughput, must be satisfied in system integration in order to provide efficient data access within process execution. Meeting these requirements is a challenge presented to the solution designer, one that will greatly affect the efficiency of system development. Therefore, how to build the relationship model between the business process and the data model, and how to use the orchestration model to automatically generate data access components, are two questions that have a great impact on software development.
RELATED WORKS

The existing enterprise modeling approaches are focused on two domains: peer-to-peer enterprise systems and multilayer enterprise modeling. David (2004) presents a loosely coupled service-composition paradigm. This paradigm employs a distributed data flow that differs markedly from the centralized information flow adopted by current service integration frameworks, such as CORBA, J2EE and SOAP. Distributed data flows support direct data transmission to avoid many performance bottlenecks of centralized processing. In addition, active mediation is used in applications employing multiple web services that are not fully compatible in terms of data formats and contents. Martin Fowler and Clifton Nock summarize customary patterns of enterprise application architecture to accelerate the development of enterprise modeling (Martin, 2002; Clifton, 2003).
However, the existing enterprise modeling methods remain largely unharnessed due to the following shortcomings: (1) they lack an automated analysis mechanism, which makes the enterprise unresponsive to changes and increases the maintenance overhead of the evolution of these models; (2) some enterprise models are just conceptual models and must be analyzed by hand, while others employ complex mathematical models for analysis, which are hard for business users to comprehend and manipulate; (3) knowledge reuse is difficult for business users due to the heterogeneity of the enterprise models. In order to tackle the above problems, through deep analysis of business process modeling and data modeling, we extract process-data mappings and data access flows to build data access components for bridging the business process and the data model. Furthermore, a pattern is automatically applied to the data access component to facilitate an efficient service.
PROCESS/DATA RELATIONSHIP MODEL

In the present environment for software development, different tools are used by separate roles in business process modeling, data modeling, software component design and coding. These tasks are so independent that the whole software development process becomes rather complex. Taking the IBM development studio as an example, we need to use modeling and programming tools such as WBI Modeler, Rational Software Architect (RSA) and WSAD-IE (Osamu, 2003). The development procedure contains the following steps:
1. The analyst will analyze the requirements to design the use cases using UML.
2. By analyzing the relationships between enterprise entities, the data model designer will design the data model and create the database on the basis of UML.
3. The process analyst will design the abstract business process using WBI Modeler.
4. The software engineer will design the functions and APIs for components using RSA.
5. The developer will implement the software components and access the database using RSA.
6. The process developer will develop the executable process based on the abstract process model, and assemble the software components as web services using WSAD-IE.
7. The system deployer will run the developed business model (EAR file) on WBI-SF.
Obviously, a good command of process orchestration, OOD and UML is a prerequisite for a designer to complete the solution design. Figure 1 presents the relationships among the design flows of the business process, software components, and database.
DATA ACCESS COMPONENT

Identification of Frequent Activity

The business process provides much global information on the whole. Not only can this information be used by developers to generate data access
components, but it can also be used to analyze the process/data relationship model and, consequently, to optimize data access activities, produce appropriate indexes for the data model, create views and apply data access patterns (Clifton, 2003), which can enhance data access performance (Figure 1).
A business process usually contains some sub-processes, and a sub-process usually contains some activities, where an activity is an operation on data. Let {P1, P2, P3, …, Pr, …} be a process set in the process model.
Definition 1: Let a set PS contain the sub-processes that are directly or indirectly invoked by a business process P, denoted by PS = {Pr}, r ∈ I, where I is an index set. If Pr is invoked in process P Nr times, the frequency of process Pr is defined as Nr / Σr∈I Nr.
Definition 2: If the k-th data-access activity in sub-process Pr is invoked nr,k times, the activity frequency within sub-process Pr is defined as nr,k / Σk∈I nr,k, where I is an index set.
Definition 3: The frequency of activity ar,k is defined as the ratio of its data accesses to the total accesses, i.e., nr,k / ΣPr∈PS Σk∈I nr,k.
Figure 1. Map relationships between process service and data service
Definition 4: A frequent querying activity is defined as an activity whose frequency is greater than the frequent-querying-activity threshold MAXSearchAF.
Definition 5: A frequent updating activity is defined as an activity whose frequency is greater than the frequent-updating-activity threshold MAXUpdateAF.
The frequency of activities can be computed by traversing the data access flow, and frequent querying activities and frequent updating activities can be identified based on these rules.
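A direct sketch of Definitions 1–5: given invocation counts gathered from the data access flow, it computes each activity's frequency and flags frequent querying and updating activities against the two thresholds. The record layout and threshold values are illustrative assumptions.

```python
# Each record: (sub_process, activity, kind, invocation count n_{r,k}).
accesses = [("P1", "a11", "query", 40), ("P1", "a12", "update", 5),
            ("P2", "a21", "query", 50), ("P2", "a22", "update", 5)]

total = sum(n for *_, n in accesses)

MAX_SEARCH_AF, MAX_UPDATE_AF = 0.3, 0.1   # assumed threshold values

frequent_querying, frequent_updating = [], []
for proc, act, kind, n in accesses:
    freq = n / total                       # Definition 3
    if kind == "query" and freq > MAX_SEARCH_AF:
        frequent_querying.append(act)      # Definition 4
    elif kind == "update" and freq > MAX_UPDATE_AF:
        frequent_updating.append(act)      # Definition 5

print(frequent_querying, frequent_updating)  # ['a11', 'a21'] []
```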
Automatic Application of Cache Pattern

We can represent the optimization with rules. According to the customized thresholds, frequent data access activities can be selected, and then the rule-analyzing system can use rules to recommend appropriate data access patterns. A performance index and user interface can be added to identify the cache pattern. Box 1 shows a strategy for the configuration of data access. The knowledge base of cache pattern strategies stores the criteria for how to apply cache patterns, and recommends the corresponding configuration of cache pattern and cache parameters according to the activation information and the different querying condition capacity provided by the user. The rules for the selection of the proper cache pattern and the configuration of cache parameters are shown in Boxes 2 and 3.
Box 1. Algorithm: application strategy of cache pattern
Input: data access flow
Output: cache pattern of data access
Step 1: Analyze the data access flow, then find all frequent querying activities, which form the activity set T;
Step 2: For each activity a ∈ T, given the corresponding data model D via the user interface: if the activation of D is 0, the static cache pattern is applied; otherwise, if the activation of D is not equal to 0, …
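The following sketch encodes the Box 1 selection rule as executable logic: an activation of 0 means the underlying data model never changes, so a static cache suffices; otherwise a dynamic cache with some refresh policy is recommended. This is one plausible reading of the strategy, not the authors' toolkit code, and the refresh heuristic is an assumption.

```python
def recommend_cache_pattern(activity, data_model_activation):
    """Recommend a cache pattern for a frequent querying activity.

    `data_model_activation` counts how often the underlying data
    model is updated; 0 means cached data can never become stale.
    """
    if data_model_activation == 0:
        return {"activity": activity, "pattern": "static cache"}
    # Data changes, so cached entries must be invalidated or refreshed;
    # tie the refresh hint loosely to the update rate (assumed rule).
    return {"activity": activity, "pattern": "dynamic cache",
            "refresh_hint": 1.0 / data_model_activation}

print(recommend_cache_pattern("a11", 0))   # static cache
print(recommend_cache_pattern("a21", 4))   # dynamic cache, refresh 0.25
```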
Figure 13. UML model for the job package
Figure 14. Joining values of attributes
The MDA-based tool used (Visual Enterprise, 2009) has an interesting feature that links the value of an attribute to a set of potential values. As shown in Figure 13, the attribute category of class JobDescription is prefixed by the sign "/". This sign shows that the value of attribute category can be selected from a list of all instances of class JobCategory already created in the system. Figure 14 shows the formula for calculating the value of attribute category of class JobCategory using the association jobCategory.
698
list of potential values of job categories is created by constraining the domain of association jobCategory to contain only the job categories created in the system. Thus, the values for this domain are defined by TheJobBoard.jobCategories that returns a collection of job categories created in the system. Use cases are implemented as processes. Figure 15 show how processes can be ensemble to create a navigation model that will allow users to launch different functionalities provided by the system.
Persistence in the MDA Approach
One of the advantages of using the MDA approach for developing complex enterprise systems is that the persistence of data is handled automatically. As previously mentioned, MDA tools interpret the UML model and automatically generate the database schema for the corresponding model. Figure 16 shows the database schema generated by the Visual Enterprise tool (Visual Enterprise, 2009) for the UML model presented in Figures 11, 12 and 13. The user only has to define the name of the database and a few other parameters; the database and the functionality for storing data into it are then generated by the MDA tool. Note that the MDA tool does not normalize the model; it is therefore the user's responsibility to design a UML model that does not require normalization.
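For readers more familiar with code-level persistence, the automatic mapping performed by an MDA tool is loosely analogous to what an object-relational mapper derives from an annotated class model. The following JPA sketch is purely illustrative; it is not how Visual Enterprise works internally, and the class and field names are assumptions based on Figure 13.

```java
import javax.persistence.*;

// Illustrative only: a class-level model from which a persistence layer and a
// database schema can be derived automatically, analogous to an MDA tool
// generating the schema in Figure 16 from the UML model.
@Entity
public class JobDescription {
    @Id @GeneratedValue
    private Long id;

    private String title;

    // The association jobCategory from Figure 13; the provider derives the
    // foreign-key column, and thus part of the schema, from this declaration.
    @ManyToOne
    private JobCategory category;

    // Getters and setters omitted for brevity.
}

@Entity
class JobCategory {
    @Id @GeneratedValue
    private Long id;

    private String name;
}
```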
WEB SERVICES

A new approach that is currently gaining strong support in the software industry is based on considering enterprise solutions as federations of services connected via well-defined contracts that define their service interfaces. The resulting system designs are often referred to as Service Oriented Architectures (SOAs) (Brown, Conallen & Tropeano, 2005, pp. 403-432; Johnston & Brown, 2005, pp. 624-636). The technology used for developing Web services is not relevant to the service a system provides to its users/clients. The view of the service should be totally independent from its implementation in some underlying technologies. As previously mentioned in this paper, the main characteristic of the MDA approach is the independence of the model describing the
Figure 16. Database schema generated from the UML model
business from its implementation in underlying technologies. Therefore, using the MDA approach to design and develop Web services will provide the same benefits: independence, flexibility and speed in software development. MDA provides an environment to design Web services at a more abstract level than that of technology-specific implementations. The technologies implementing Web services depend on Port 80. The fact that developers of Web services program directly in these technologies makes software development vulnerable to rapid obsolescence and is also far too labor-intensive. Systems built using MDA exhibit more flexibility and agility in the face of technological change—as well as a higher level of quality and robustness, due to the more formal and accurate specification of requirements and design.
MDA AND ERPs

Enterprise Resource Planning (ERP) systems are complex solutions for managing the multi-faceted nature of modern enterprises. Initially, ERPs were used only in the context of large corporations. Currently, there is a stronger effort by small and mid-size companies to use ERPs to develop their core information technology systems (van Everdingen, van Hillegersberg & Waarts, 2000). There is a widespread opinion in the industry that ERPs can be seen as a viable alternative to custom application development for standard management needs (Dugerdil & Gaillard, 2006). Besides the numerous advantages of using ERPs as a basis for IT needs, there are several major disadvantages that are a serious obstacle to using ERPs on a large scale in small and medium enterprises. One major issue is the inherent complexity of ERPs; their implementation often requires a large dependency on the ERP provider that most managers would like to avoid at any cost. Another consequence of the inherent complexity of ERPs is that most of the clients trying to implement them do not have a deep understanding of
the ERP solution, and therefore large amounts of money must be invested in consultancy from ERP providers. A solution to these problems is the use of the MDA approach to configure the functionalities provided by an ERP system. MDA provides enterprise managers with a high-level model, totally independent of the ERP's target platform, at the level of business processes and expressed using visual tools. The UML extension proposed by Eriksson and Penker (2000) closely matches business modeling standards and complements the UML Business Modeling Profile published by IBM (Johnston, 2004). One of the most relevant efforts to use the MDA approach in the context of an ERP implementation is presented by Dugerdil and Gaillard (2006). They used the Eriksson-Penker UML profile (Eriksson & Penker, 2000) and implemented it using IBM's XDE modeling tool to create a business modeling environment in the context of an ERP implementation. ERPs provide a large amount of functionality, and its use depends on the particular needs of the business model of a corporation. In order to activate the functionalities of an ERP system, some configuration tables need to be created that will make the necessary components/modules available. As previously mentioned, producing configuration tables is difficult and requires a detailed knowledge of the ERP system and a considerable amount of manual labor. As the ERP functionalities to be activated depend on the business model, it is natural to develop the business model at a high level of abstraction using UML-based visual tools. Constructing a high-level business model is also one of the best practices in ERP implementation (Thomas, 2002). The use of the MDA approach when the target system is an ERP requires that the transformations leading from a PIM to a PSM generate, instead of code as is generally the case, configuration tables that activate the required ERP modules (Dugerdil & Gaillard, 2006). A hypothetical sketch of such a transformation is given below.
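The sketch below illustrates, in Java, the idea of a transformation whose output is a configuration table rather than source code. The business capabilities, module codes, and their mapping are entirely invented for illustration; real transformations are tool- and ERP-specific.

```java
import java.util.*;

// Hypothetical PIM-to-PSM transformation that emits an ERP configuration
// table (module activation flags) instead of generated code.
class BusinessProcess {
    final String name;
    final List<String> requiredCapabilities;  // e.g. "invoicing", "inventory"

    BusinessProcess(String name, List<String> requiredCapabilities) {
        this.name = name;
        this.requiredCapabilities = requiredCapabilities;
    }
}

class ErpConfigurationGenerator {
    // Invented mapping from business capabilities to ERP module codes.
    private static final Map<String, String> CAPABILITY_TO_MODULE = Map.of(
            "invoicing", "FIN-AR",
            "inventory", "MM-IM",
            "payroll", "HR-PY");

    // Walk the business-process model (the PIM) and activate the modules it needs.
    Map<String, Boolean> generate(List<BusinessProcess> businessModel) {
        Map<String, Boolean> configTable = new TreeMap<>();
        CAPABILITY_TO_MODULE.values().forEach(module -> configTable.put(module, false));
        for (BusinessProcess process : businessModel) {
            for (String capability : process.requiredCapabilities) {
                String module = CAPABILITY_TO_MODULE.get(capability);
                if (module != null) configTable.put(module, true);  // activation entry
            }
        }
        return configTable;
    }
}
```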
MODEL-DRIVEN BUSINESS INTEGRATION

Business process integration and management (BPIM) is an important element in the continuous efforts enterprises undertake to adjust and transform their business to respond to the many challenges of competition. BPIM solutions should be efficient, fast, reusable, robust and low-cost. Initially, integration solutions were focused on connecting systems. The approach used was to provide various Enterprise Application Integration (EAI) adapters that would establish ad hoc point-to-point connections (Zhu, Tian, Li, Sun et al., 2004). Later, the same problem was addressed by creating a hub, a centralized integration engine hosting the integration logic. Examples of such hubs are IBM WebSphere Business Integration (WBI), Microsoft BizTalk Server and BEA WebLogic Integrator, to name a few. This solution facilitates the creation, maintenance, and changing of integration logic and provides a better environment for managing change in a more flexible and efficient way. Although the creation of hubs facilitated the overall communication among different enterprise modules, it did not address the heart of the problem: how to design, develop, maintain, and utilize the integration hub for real business cases (Zhu, Tian, Li, Sun et al., 2004). Existing solutions were very much based on trying to adjust existing IT systems to address changes in enterprise business. Currently, there is a widely accepted opinion (Frankel, 2003; Johnston & Brown, 2005; Kleppe, Warmer & Bast, 2003; Pastor & Molina, 2007; Sewal, 2003; Thomas, 2002, to name a few) that today's business integration solutions must be elevated to the design and analysis of high-level business strategy and processes before any implementation by the IT department. Therefore, creating a business model representing the goals and objectives of the enterprise must come first and must determine the way in which
IT systems can be refined or developed. An IT system that does not totally support the business model is inadequate. The creation of a UML extension for business modeling (Eriksson & Penker, 2000) creates a homogeneous environment where the business model and the supporting software model can be designed and developed using the same formalism. Therefore, there is hope that this new environment will help eliminate discrepancies that could exist between the business model and the software model. Any change occurring in the business model will be propagated down to the IT system. We believe that the model-driven approach is the key to solving the above-mentioned challenges.
Industrial Use

MDA is making constant progress in the software industry. Away from any hype, MDA gives companies a viable alternative for application development instead of corporate stagnation or offshoring (Pastor & Molina, 2007). Not only is the number of MDA-compliant vendors increasing over time but, most importantly, so is the number of companies that are using this approach for developing large-scale enterprise information systems. The following are some examples of success in using the model driven architecture. A complete list of companies that have applied this approach successfully can be found in (Object Management Group, 2000). Daimler-Chrysler has used the MDA approach for developing its information system. The results of this experience include (Object Management Group, 2000):

• 15% increase in development productivity in the first year,
• ROI (return on investment) in less than 12 months,
• Expected total productivity increase of 30% in the second year compared to a non-MDA approach.
Lockheed Martin Aeronautics has used the OMG's MDA to develop the F-16 Modular Mission Computer Application Software. Their goal was to achieve cross-platform compatibility and increased productivity and quality, all in the context of the demanding environment of avionics software development (Object Management Group, 2000). Conquest, Inc., is a premier provider of advanced large-scale systems and software technology solutions to federal and commercial customers. MDA helped facilitate communication by graphically representing vast amounts of data in discrete views that could be reviewed and understood across different organizational groups and business areas. Modeling also encouraged collaboration between groups, helping them identify redundant and non-mission-enhancing activities, and driving significant cost savings (Object Management Group, 2000). A more detailed study that provides reasons and advantages of using the MDA approach is presented in (Pastor & Molina, 2007). Figure 17, borrowed from Sewal (2003), presents a convincing list of savings when using the MDA approach. Currently, the use of the MDA approach is rather limited; there is a relatively small number of developers who master this technology at an industrial level. Historically, there has been a well-defined gap in the software industry between developers and modelers/architects, and they have very different tasks in the software development process. Modelers design the system and its architecture
and developers translate models into a programming language. The use of MDA narrows the gap existing between developers and modelers and creates the need for a high-level modeler/developer. Modeling is a more abstract activity than coding; therefore, while the market is full of developers in all major technologies (Java, .NET, etc.), there is a great deficit of high-level modelers proficient in UML and its various profiles. Training developers to become modelers is a necessity considering the important savings provided by the use of the MDA approach (Figure 17). The MDA approach offers automation of code generation that includes:

• Automatically generating complete programs from models (not just class/method skeletons and fragments)
• Automatically verifying models at a high level of abstraction (for example, by executing them)
History teaches us that automation is by far the most effective technological means for boosting productivity and reliability. It will take time and effort for MDA to be widely accepted, and only the future will tell when MDA will become a mainstream technology.
Figure 17. Savings from using the MDA approach in industry
CONCLUSION

This paper describes the concept and the feasibility of using MDA-based tools for designing, developing and implementing Enterprise Information Systems. The center of the approach is a conceptual model that expresses concepts from the problem domain and their relationships. The conceptual model is built using UML, a standard in the software industry. The model is developed visually, and the language used is simple and understandable to programmers and non-programmers alike; it therefore facilitates the dialog with stakeholders. This modeling paradigm is specialist-centric, as it allows for a greater participation of specialists in model construction. The model is constructed conceptually, focusing on the business logic that is familiar to the specialists. Thus, the intellectual investment spent on the design of the business model is preserved, as it is not affected by changes in the underlying technologies. The conceptual model is constructed at a higher level of abstraction without considering any implementation or computing platform issues. Therefore, the model is platform independent; it can be implemented in different programming environments and computing platforms. A platform independent model can be transformed into several platform specific models that take into consideration different computing platforms. Once the business logic of the problem is clarified and expressed in the model at a high level of abstraction, implementation issues can be addressed. Implementation details are applied to the general model by a set of transformations. The model obtained considers a particular implementation environment and is therefore specific to the selected computing platform. A platform specific model contains all the necessary details so that code can be generated automatically. Code can be generated for a number of programming environments such as Java, C#, .NET, etc. MDA separates the business model from the implementation technologies and thus creates a
better environment for implementing the same business model in a newer underlying technology. As an example, the switch from a traditional networking environment to a wireless environment comes without the efforts and pains caused by the process of rewriting the business model for another computing platform. Important efforts are directed at using the MDA approach to model the business when the target platform is an ERP. MDA allows for a better and faster uptake of the latest achievements in software engineering techniques: once new design patterns are invented, they will be implemented in the MDA tools by vendors, and therefore the quality of the code obtained at the end of the process will be the same independently of the qualifications of the team of developers. As MDA is a relatively new modeling paradigm, it is not well known in the community of software developers. There is a substantial lack of qualified modelers able to apply this new modeling paradigm at large scale. Currently, there is a wide variety of commercial tools available claiming to apply MDA principles as defined by the OMG. Some allow users to choose the implementation environment (Pastor & Molina, 2007) and some have a fixed implementation technology (Visual Enterprise, 2009). Different tools provide different levels of code generation, and vendors of these tools claim to be MDA-compliant. There is no formalism for checking compliance with OMG principles. Therefore, there is a need for better collaboration and better standards for what constitutes an MDA-compliant product.
REFERENCES

Booch, G., Rumbaugh, J., & Jacobson, I. (1999). The unified modeling language user guide. Reading, MA: Addison Wesley.

Brown, W. A., Conallen, J., & Tropeano, D. (2005). Practical insights into MDA: Lessons from the design and use of an MDA toolkit. In Beydeda, S., Book, M., & Gruhn, V. (Eds.), Model-driven software development (pp. 403–432). New York: Springer. doi:10.1007/3-540-28554-7_18

Dugerdil, P., & Gaillard, G. (2006). Model-driven ERP implementation. In Proceedings of the 2nd international workshop on model-driven enterprise information systems.

Eriksson, H.-E., & Penker, M. (2000). Business modeling with UML: Business patterns at work. Needham, MA: OMG Press.

Frankel, D. S. (2003). Model driven architecture: Applying MDA to enterprise computing. New York: Wiley Publishing Inc./OMG Press.

Gasevic, D., Djuric, D., & Devedzic, V. (2006). Model driven architecture and ontology development. Berlin: Springer.

Johnston, J. K., & Brown, A. W. (2005). A model driven development approach to creating service-oriented solutions. In Beydeda, S., Book, M., & Gruhn, V. (Eds.), Model-driven software development (pp. 624–636). New York: Springer.

Johnston, S. (2004). Rational UML profile for business modeling. IBM developerWorks. Retrieved from http://www.ibm.com/developerworks/rational/library/5167.html

Kennedy Carter Inc. (n.d.). Retrieved 2009, from Kennedy Carter Web site: http://www.kc.com

Kleppe, A., Warmer, J., & Bast, W. (2003). MDA explained: The model driven architecture—practice and promise. Reading, MA: Addison Wesley.

McNeile, A., & Simons, N. (2003). MDA: The vision with the hole. White paper, Metamaxim Ltd. Retrieved from http://www.metamaxim.com

McNeile, A., & Simons, N. (2004). Methods of behaviour modelling: A commentary on behaviour modelling techniques for MDA. White paper, Metamaxim Ltd. Retrieved from http://www.metamaxim.com

Mellor, S. J., & Balcer, M. J. (2002). Executable UML: A foundation for model driven architecture. Reading, MA: Addison Wesley.

Meyer, B. (1988). Object-oriented software construction. Englewood Cliffs, NJ: Prentice Hall.

Meyer, B. (1992). Eiffel: The language. Upper Saddle River, NJ: Prentice Hall.

Object Management Group. (2003). MDA Guide V1.0.1. Retrieved from http://www.omg.org

Object Management Group. (2009). OMG model driven architecture: How systems will be built. Retrieved 2009, from http://omg.org

Papajorgji, P., Clark, R., & Jallas, E. (2009). The model driven architecture approach: A framework for developing complex agricultural systems. In P. Papajorgji & P. M. Pardalos (Eds.), Advances in modeling agricultural systems (Springer Optimization and Its Applications). New York: Springer.

Papajorgji, P., & Pardalos, P. M. (2006). Software engineering techniques applied to agricultural systems: An object-oriented and UML approach. New York: Springer.

Papajorgji, P., & Shatar, T. (2004). Using the unified modeling language to develop soil water-balance and irrigation-scheduling models. Environmental Modelling & Software, 19, 451–459. doi:10.1016/S1364-8152(03)00160-9

Pastor, O., & Molina, J. C. (2007). Model-driven architecture in practice. Germany: Springer Verlag.

Sewal, S. J. (2003). Executive justification for adopting model driven architecture. Retrieved from http://omg.org/mda/presentations.html

Thomas, L. J. (2002). ERP et progiciels de gestion intégrés. Paris: Dunod.

van Everdingen, Y., van Hillegersberg, J., & Waarts, E. (2000). ERP adoption by European midsize companies. Communications of the ACM, 43(4), 27–31. doi:10.1145/332051.332064

Visual Enterprise. (2009). Retrieved 2009, from http://intelliun.com

Warmer, J., & Kleppe, A. (1999). The object constraint language: Precise modeling with UML. Reading, MA: Addison Wesley.

Zhu, J., Tian, Z., Li, T., Sun, W., et al. (2004). Model driven business process integration and management: A case study with the Bank SinoPac regional platform. IBM Journal of Research and Development.
This work was previously published in Enterprise Information Systems and Implementing IT Infrastructures: Challenges and Issues, edited by S. Parthasarathy, pp. 140-158, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 3.10
Impact of Portal Technologies on Executive Information Systems Udo Averweg Information Services, eThekwini Municipality & University of KwaZulu-Natal, South Africa Geoff Erwin Cape Peninsula University of Technology, South Africa Don Petkov Eastern Connecticut State University, USA
INTRODUCTION

Internet portals may be seen as Web sites which provide the gateway to corporate information from a single point of access. Leveraging knowledge—both internal and external—is the key to using a portal as a centralised database of best practices that can be applied across all departments and all lines of business within an organisation (Zimmerman, 2003). The potential of the Web portal market and its technology has inspired the mutation of search engines (for example, Yahoo®) and the establishment of new vendors in that area (for example, Hummingbird® and Brio Technology®). A portal is simply a single, distilled view of information from various sources. Portal
technologies integrate information, content, and enterprise applications. However, the term portal has been applied to systems that differ widely in capabilities and complexity (Smith, 2004). A portal aims to establish a community of users with a common interest or need. Portals include horizontal applications such as search, classification, content management, business intelligence (BI), executive information systems (EIS), and a myriad of other technologies. Portals not only pull these together but are also absorbing much of the functionality from these complementary technologies (Drakos, 2003). When paired with other technologies, such as content management, collaboration, and BI, portals can improve business processes and boost efficiency within and across organisations (Zimmerman, 2003). This chapter investigates the
level of impact (if any) of portal technologies on EIS. It proceeds with an overview of these technologies, analysis of a survey on the impact of Web-based technologies on EIS implementation, and conclusions on future trends related to them.
BACKGROUND ON PORTAL TECHNOLOGIES AND EIS

Gartner defines a portal as “access to and interaction with relevant information assets (information/content, applications and business processes), knowledge assets and human assets, by select target audiences, delivered in a highly personalized manner” (Drakos, 2003). Drakos (2003) suggests that a significant convergence is occurring with portals in the centre. Most organisations are being forced to revisit their enterprise-wide Web integration strategies (Hazra, 2002). A single view of enterprise-wide information is respected and treasured (Norwood-Young, 2003). Enterprise Information Portals are becoming the primary way in which organisations organise and disseminate knowledge (PricewaterhouseCoopers, 2001). EIS grew out of the development of information systems (IS) to be used directly by executives and to augment the supply of information by subordinates (Srivihok, 1998). For the purposes of this article, an Executive Information System is defined as “a computerized system that provides executives with easy access to internal and external information that is relevant to their critical success factors” (Watson et al., 1997). EIS are an important element of the information architecture of an organisation. Different EIS software tools and/or enterprise resource planning (ERP) software with EIS features exist. EIS is a technology that is emerging in response to managers’ specific decision-making needs (Turban et al., 1999). Turban (2001) suggests that EIS capabilities are being “embedded in BI.” All major EIS and information product vendors now offer Web versions of their tools, designed to
function with Web servers and browsers (PricewaterhouseCoopers, 2002). With EIS established in organisations and the presence of portal technologies, there is thus a need to investigate the link (if any) between EIS and portal technologies. Web-based technologies are causing a reexamination of existing information technology (IT) implementation models, including EIS (Averweg et al., 2003). Web-based tools “are very much suited” to executives’ key activities of communicating and informing (Pijpers, 2001). With the emergence of global IT, existing paradigms are being altered, which is spawning new considerations for successful IT implementation. Challenges exist in building enterprise portals as a new principle of software engineering (Hazra, 2002). Yahoo® is an example of a general portal. Yahoo® enables the user to maintain a measure of mastery over a vast amount of information (PricewaterhouseCoopers, 2001). Portals are an evolutionary offshoot of the Web (Norwood-Young, 2003). The Web is “a perfect medium” for deploying decision support and EIS capabilities on a global basis (Turban et al., 1999). As the usage of IT increases, Web-based technologies can provide the means for greater access to information from disparate computer applications and other information resources (Eder, 2000). Some Web-based technologies include: intranet, Internet, extranet, e-commerce business-to-business (B2B), e-commerce business-to-consumer (B2C), wireless application protocol (WAP), and other mobile and portal technologies. The portal has become the most-desired user interface in Global 2000 enterprises (Drakos, 2003).
SURVEY OF WEB-BASED TECHNOLOGIES’ IMPACT ON EIS

The technology for EIS is evolving rapidly and future systems are likely to be different (Sprague & Watson, 1996). EIS is now clearly in a state of flux. As Turban (2001) notes, “EIS is going through
a major change.” There is therefore both scope and need for research in the particular area of EIS being impacted by portal technologies, as executives need systems that provide access to diverse types of information. Emerging (Web-based) technologies can redefine the utility, desirability, and economic viability of EIS technology (Volonino et al., 1995). There exists a high degree of similarity between the characteristics of a “good EIS” and Web-based technologies (Tang et al., 1997). Given the absence of research efforts on the impact of portal technologies on EIS implementations, this research begins to fill the gap with a study of thirty-one selected organisations in South Africa which have implemented EIS. A validated survey instrument was developed and contained seven-point Likert scale statements (anchored with (1) Not at all and (7) Extensively) dealing with how an interviewee perceived that specific Web-based technologies impacted his or her organisation’s EIS implementation. The selected Web-based technologies were: (1) intranet; (2) Internet; (3) extranet; (4) e-commerce: business-to-business (B2B); (5) e-commerce: business-to-consumer (B2C); (6) wireless application protocol (WAP) and other mobile technologies; and (7) any other Web-based technologies (for example, portal technologies). The questionnaire was administered during a semi-structured interview process. A similar approach was adopted by Roldán and Leal (2003) in their EIS survey in Spain. The sample was selected using the unbiased “snowball” sampling technique. This technique was also used by Roldán and Leal (2003). The
sample selected included organisations with actual EIS experience, with representatives from the following three constituencies: (1) EIS executives/business end-users; (2) EIS providers; and (3) EIS vendors or consultants. These three constituencies were identified and used in EIS research by Rainer and Watson (1995). A formal extensive interview schedule was compiled and used for the semi-structured interviews. Those were conducted during May-June 2002 at organisations in the large eThekwini Municipal Area (EMA) in the eastern part of South Africa, including Durban, which is the most populous municipality in the country, with a geographic area size of 2,300 km2 and a population of 3.09 million citizens (Statistics South Africa, 2001). The number of surveyed interviewees and associated percentages per constituency for the three EIS constituencies is reflected in Table 1. The respondents in the organisations surveyed reported a wide range of available commercially purchased EIS software tools and/or ERP software with EIS features. These included Cognos®, JDEdwards BI®, Oracle®, Hyperion®, Lotus Notes®, Business Objects®, and Pilot®. Cognos® was the most popular EIS software tool, comprising 60% of the sample surveyed. From the data gathered through the authors’ survey instrument, a tally and associated percentage of the perceived degree to which specific Web-based technologies impacted the respondent’s current EIS implementation in the organisations surveyed is reflected in Table 2.
Table 1. EIS constituencies and number of interviewees surveyed per constituency

| Stakeholder groups (constituencies) | Number of interviewees surveyed and associated percentage of total sample |
|---|---|
| EIS executives/business end-users | 20 (64.5%) |
| EIS providers | 7 (22.6%) |
| EIS vendors or consultants | 4 (12.9%) |
| SAMPLE SIZE | 31 (100%) |
Table 2 shows that only seven (22.5%) organisations surveyed report that the Intranet significantly impacted their EIS implementation. Intranets are usually combined with and accessed via a corporate portal (Turban et al., 2005). The level of impact by the Internet on EIS implementation is slightly lower, with six (19.4%) organisations surveyed reporting that the Internet has significantly impacted their EIS implementation. While 24 (77.4%) organisations surveyed report that the Extranet had no impact on their organisation’s EIS implementation, the balance of the data sample (22.6%) reports different degrees of impact. The results in Table 2 show that the vast majority (90.4%) of respondents report that e-commerce (B2B) has not impacted EIS implementation in organisations surveyed. A slightly lower result (83.9%) was reported for e-commerce (B2C). One possible explanation for the e-commerce (B2B) and (B2C) low impact levels is that the software development tools are still evolving and changing rapidly. WAP and other mobile technologies have no (93.6%) or very little (3.2%) impact on EIS implementations. Of the seven Web-based technologies given in Table 2, WAP and other mobile technologies have the least impact (combining “Somewhat much,” “Very much,” and “Extensively”) on EIS implementation in organisations
surveyed. Only one respondent (3.2%) reported that WAP and other technologies had extensively impacted the EIS implementation in her organisation. A possible explanation for this result is that the EIS consultant was technically proficient in WAP technologies. The potential benefits of mobile access to portals are numerous and self-evident. PricewaterhouseCoopers (2002) notes that organisations must first establish the benefits of mobile access to their portals and assess the value of providing those benefits via mobile access to the organisation. According to Table 2, three interviewees reported that their organisation’s EIS implementations were significantly impacted (“Very much” and “Extensively”) by portal technologies. At first this may appear to be noteworthy, as the portal technology impact on EIS implementations (9.7%) is higher than the extranet (6.5%), e-commerce (B2B) (6.4%), e-commerce (B2C) (6.4%), and WAP and other technologies (3.2%) impacts. However, it should be noted that the impact levels of all the Web-based technologies assessed are fairly low. This still means that after the Intranet and Internet, portal technologies have the third highest impact on EIS implementations in organisations surveyed. Combining the results (“Somewhat much,” “Very much,” and “Extensively”) for each of the seven Web-based technologies, Table 3 gives a descending ranking order of the levels of impact of different Web-based technologies on EIS implementations.
Table 2. Tally and associated percentage of the degree to which specific Web-based technologies impacted respondent’s current EIS implementation (N=31)

| Web-based technology | Not at all | Very little | Somewhat little | Uncertain | Somewhat much | Very much | Extensively |
|---|---|---|---|---|---|---|---|
| Intranet | 17 (54.8%) | 2 (6.5%) | 2 (6.5%) | 0 (0.0%) | 3 (9.7%) | 4 (12.9%) | 3 (9.6%) |
| Internet | 21 (67.7%) | 1 (3.2%) | 1 (3.2%) | 0 (0.0%) | 2 (6.5%) | 3 (9.7%) | 3 (9.7%) |
| Extranet | 24 (77.4%) | 1 (3.2%) | 2 (6.5%) | 1 (3.2%) | 1 (3.2%) | 2 (6.5%) | 0 (0.0%) |
| E-Commerce: (B2B) | 28 (90.4%) | 1 (3.2%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 1 (3.2%) | 1 (3.2%) |
| E-Commerce: (B2C) | 26 (83.9%) | 1 (3.2%) | 1 (3.2%) | 0 (0.0%) | 2 (6.5%) | 0 (0.0%) | 1 (3.2%) |
| WAP and other mobile technologies | 29 (93.6%) | 1 (3.2%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 1 (3.2%) |
| Portal technologies | 26 (83.8%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 2 (6.5%) | 2 (6.5%) | 1 (3.2%) |
Table 3. Descending rank order of impact levels of Web-based technologies on current EIS implementation

| Rank | Web-based technology | Tally and level of impact on EIS implementations (N=31) |
|---|---|---|
| 1 | Intranet | 10 (32.2%) |
| 2 | Internet | 8 (25.9%) |
| 3 | Portal technologies | 5 (16.2%) |
| 4 | Extranet | 3 (9.7%) |
| 4 | E-Commerce: (B2C) | 3 (9.7%) |
| 6 | E-Commerce: (B2B) | 2 (6.4%) |
| 7 | WAP and other mobile technologies | 1 (3.2%) |
A tally and associated percentage of the perceived degree to which specific Web-based technologies will impact a respondent’s future EIS implementation is given in Table 4. These were obtained from the data gathered using the authors’ survey instrument. Table 4 reflects that only two (6.4%) organisations surveyed reported that it is unlikely that the Intranet will impact future EIS implementations. The unlikeliness of impact by the Internet on future EIS implementations is somewhat higher (16.1%). While seven (22.6%) respondents indicated that it is unlikely that the Extranet will impact
their future EIS implementations, eight (25.8%) respondents were unsure of future impact levels by the extranet. Twelve (38.7%) respondents indicated that it is unlikely that e-commerce (B2B) will impact future EIS implementations. Almost half (48.4%) of organisations surveyed reported that it is unlikely that e-commerce (B2C) will impact future EIS implementations. WAP and other mobile technologies have similar (42.0%) unlikely future levels of impact. It is striking to note that 21 (67.7%) respondents indicated that it is Extremely unlikely that portal technologies will impact future EIS implementations. This result (when combined with the “Slightly unlikely” and “Quite unlikely” degrees) rises to 24 (75.2%) organisations surveyed. This finding is somewhat surprising considering that
Table 4. Tally and associated percentage of the expected degree to which specific Web-based technologies will impact respondent’s future EIS implementations (N=31)

| Web-based technology | Extremely likely | Quite likely | Slightly likely | Uncertain | Slightly unlikely | Quite unlikely | Extremely unlikely |
|---|---|---|---|---|---|---|---|
| Intranet | 17 (54.8%) | 7 (22.6%) | 3 (9.7%) | 2 (6.5%) | 0 (0.0%) | 1 (3.2%) | 1 (3.2%) |
| Internet | 12 (38.8%) | 6 (19.3%) | 5 (16.1%) | 3 (9.7%) | 1 (3.2%) | 1 (3.2%) | 3 (9.7%) |
| Extranet | 6 (19.3%) | 7 (22.6%) | 3 (9.7%) | 8 (25.8%) | 0 (0.0%) | 1 (3.2%) | 6 (19.4%) |
| E-Commerce: (B2B) | 3 (9.7%) | 9 (29.0%) | 4 (12.9%) | 3 (9.7%) | 2 (6.5%) | 4 (12.9%) | 6 (19.3%) |
| E-Commerce: (B2C) | 2 (6.5%) | 9 (29.0%) | 4 (12.9%) | 1 (3.2%) | 2 (6.5%) | 4 (12.9%) | 9 (29.0%) |
| WAP and other mobile technologies | 1 (3.2%) | 8 (25.8%) | 5 (16.1%) | 4 (12.9%) | 0 (0.0%) | 3 (9.7%) | 10 (32.3%) |
| Portal technologies | 3 (9.7%) | 2 (6.5%) | 1 (3.2%) | 1 (3.2%) | 1 (3.2%) | 2 (6.5%) | 21 (67.7%) |
portal technologies currently have the third highest level of impact on EIS implementations in organisations surveyed. A possible explanation for this finding is that some respondents are not aware of the existence of such technology. Roldán and Leal (2003) report that the availability of Web-based technologies, “together with the need to build something similar to an EIS but focused on all members of the organisation has led to the development of the enterprise information portal (EIP) concept, which, to some extent represents the latest incarnation of EIS.” According to Trowbridge (2000), two elements characterise these systems: an EIP “acts as a single point of access to internal and external information” and “gives users access to disparate enterprise information systems.” Combining the positive attitude results (“Extremely likely,” “Quite likely,” and “Slightly likely”) in Table 4 for each of the seven Web-based technologies, Table 5 gives a descending ranking order of the expected degree to which Web-based technologies will impact respondents’ future EIS implementations.
FUTURE TRENDS AND CONCLUSION

We may notice three significant trends from the data in Table 5. First, this rank order of impact
levels of Web-based technologies on future EIS implementations matches the current rank order of impact levels of Web-based technologies on EIS implementations (see Table 3). Second, while nearly three quarters (75.2%) of respondents surveyed report that it is unlikely that portal technologies will impact future EIS implementations (see Table 4), seen in the context of the other six Web-based technologies, portals still appear in the top three rankings. This is an important consideration for IS practitioners when planning future EIS implementations. Third, when comparing current and future impact levels of Web-based technologies on EIS, there is a positive impact trend for all Web-based technologies. The largest trend increase is the Intranet, rising from 32.2% to 87.1%. As Basu et al. (2000) report, the use of Web-based technologies in the distribution of information is becoming widespread. These technologies will impact future EIS implementations. The findings of this survey show that while EIS have a significant role in organisations in a large South African metropolitan area, their technological base is not affected considerably by the latest innovations of Web-based technologies, including portals. A potential limitation of the research is the localised sample involved in the investigation, but given the highly developed IT infrastructure of most South African companies, our findings can be cautiously generalised for most other countries. The role of portals is to integrate
Table 5. Descending rank order of impact levels of Web-based technologies on future EIS implementation

| Rank | Web-based technology | Tally and level of impact on future EIS implementations |
|---|---|---|
| 1 | Intranet | 27 (87.1%) |
| 2 | Internet | 23 (74.2%) |
| 3 | Portal technologies | 16 (51.6%) |
| 3 | Extranet | 16 (51.6%) |
| 5 | E-Commerce: (B2C) | 15 (48.4%) |
| 6 | E-Commerce: (B2B) | 14 (45.1%) |
| 7 | WAP and other mobile technologies | 6 (19.4%) |
potential information to the users. IT developers must be aware of emerging trends in the portal technology market to create systems that will be able to incorporate the latest technological developments and new methods of information delivery and presentation for organisations. As the use of Web-based technologies in the distribution of information in organisations becomes more widespread, it is envisaged that the impact level of portal technologies on future EIS implementations will increase significantly.
REFERENCES

Averweg, U., Cumming, G., & Petkov, D. (2003). Development of an executive information system in South Africa: Some exploratory findings. In Proceedings of the Conference on Group Decision and Negotiation (GDN2003), held within the 5th EURO/INFORMS Joint International Meeting, Istanbul, Turkey, July 7-10.

Basu, C., Poindexter, S., Drosen, J., & Addo, T. (2000). Diffusion of executive information systems in organizations and the shift to Web technologies. Industrial Management & Data Systems, 100(6), 271–276. doi:10.1108/02635570010320484

Drakos, N. (2003). Portalising your enterprise. Gartner Symposium ITXPO2003, Cape Town, South Africa, August 4-6.

Eder, L. B. (2000). Managing healthcare information systems with Web-enabled technologies. Hershey, PA: Idea Group Publishing.

Hazra, T. K. (2002). Building enterprise portals: Principles to practice. In Proceedings of the 24th International Conference on Software Engineering, Orlando, May 19-25.

Norwood-Young, J. (2003). The little portal that could. In Wills (Ed.), Business solutions using technology platform, 1(4), 14-15.
Pijpers, G. G. M. (2001). Understanding senior executives’ use of information technology and the Internet. In Murugan Anandarajan & Claire A. Simmers (Eds.), Managing Web usage in the workplace: A social, ethical and legal perspective. Hershey, PA: Idea Group Publishing.

PricewaterhouseCoopers. (2001). Technology forecast: 2001-2003. Mobile Internet: Unleashing the power of wireless. Menlo Park, CA.

PricewaterhouseCoopers. (2002). Technology forecast: 2002-2004. Volume 1: Navigating the future of software. Menlo Park, CA.

Rainer, R. K., Jr., & Watson, H. J. (1995). The keys to executive information system success. Journal of Management Information Systems, 12(2), 83–98.

Roldán, J. L., & Leal, A. (2003). Executive information systems in Spain: A study of current practices and comparative analysis. In Forgionne, Gupta, & Mora (Eds.), Decision making support systems: Achievements and challenges for the new decade (pp. 287-304). Hershey, PA: Idea Group Publishing.

Smith, M. A. (2004). Portals: Toward an application framework for interoperability. Communications of the ACM, 47(10), 93–97. doi:10.1145/1022594.1022600

Sprague, R. H., Jr., & Watson, H. J. (1996). Decision support for management. Upper Saddle River, NJ: Prentice-Hall.

Srivihok, A. (1998). Effective management of executive information systems implementations: A framework and a model of successful EIS implementation. PhD dissertation, Central Queensland University, Rockhampton, Australia.

Statistics South Africa. (2001). Census 2001 digital census atlas. Retrieved July 5, 2006, from http://gis-data.durban.gov.za/census/index.html
Tang, H., Lee, S., & Yen, D. (1997). An investigation on developing Web-based EIS. Journal of CIS, 38(2), 49–54.

Trowbridge, D. (2000). EIP—More profitable for integrators than users? Computer Technology Review, 20(5), 20.

Turban, E. (2001). Personal communication, October 7. California State University, Long Beach, USA, and City University of Hong Kong.

Turban, E., McLean, E., & Wetherbe, J. (1999). Information technology for management. New York: John Wiley & Sons.

Turban, E., Rainer, R. K., & Potter, R. E. (2005). Introduction to information technology (3rd ed.). New York: John Wiley & Sons.

Volonino, L., Watson, H. J., & Robinson, S. (1995). Using EIS to respond to dynamic business conditions. Decision Support Systems, 14(2), 105–116. doi:10.1016/0167-9236(94)00005-D

Watson, H. J., Houdeshel, G., & Rainer, R. K., Jr. (1997). Building executive information systems and other decision support applications. New York: John Wiley & Sons.

Zimmerman, K. A. (2003). Portals: Not just a one-way street. KMWorld, Creating and Managing the Knowledge-Based Enterprise, 12(8), September. Retrieved July 27, 2007, from http://www.kmworld.com/Articles/PrintArticle.aspx?ArticleID-9496
KEY TERMS AND DEFINITIONS

Executive Information System: A computerised system that provides executives with easy access to internal and external information that is relevant to their critical success factors.

Extranet: A private Internet that connects multiple organisations.

Intranet: A private Internet for an organisation.

Portal: Provides access to and interaction with relevant information assets (information/content, applications and business processes), knowledge assets, and human assets, by select target audiences, delivered in a highly personalised manner.

Web-Based Technologies: Technologies which are core to the functioning of the World Wide Web.

Wireless Application Protocol (WAP): A collection of standards for accessing online information and applications from wireless devices such as mobile phones, two-way radios, pagers, and personal digital assistants.

World Wide Web: The universe of network-accessible information, supported by a body of software, and a set of protocols and conventions (http://www.w3.org/WWW).
This work was previously published in Encyclopedia of Internet Technologies and Applications, edited by Mario Freire and Manuela Pereira, pp. 215-221, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 3.11
A Voice-Enabled Pervasive Web System with Self-Optimization Capability for Supporting Enterprise Applications Shuchih Ernest Chang National Chung Hsing University, Taiwan
ABSTRACT

Other than providing Web services through popular Web browser interfaces, pervasive computing may offer new ways of accessing Internet applications by utilizing various modes of interfaces to interact with their end-users, and its technology could involve new ways of interfacing with various types of gateways to back-end servers from any device, anytime, and anywhere. In this chapter, the mobile phone was used as the pervasive device for accessing an Internet application prototype, a voice-enabled Web system (VWS), through voice user interface technology. Today’s Web sites are intricate but not intelligent, so finding an efficient method to assist user searching is particularly important. One of these efficient methods is to construct an adaptive Web site. This chapter
shows that multimodal user-interface pages can be generated by using an XSLT stylesheet which transforms XML documents into various formats including XHTML, WML, and VoiceXML. It also describes how VWS was designed to provide an adaptive voice interface using an Apache Web server, a voice server, a Java servlet engine, and a genetic algorithm based voice Web restructuring mechanism.
INTRODUCTION

The mobile phone and the Internet brought us to a new era by offering a new way for person-to-person communication and facilitating companies and their customers in conducting business through electronic commerce (Gulliver, Serif & Ghinea, 2004; Toye, Sharp, Madhavapeddy & Scott, 2005; Roussos, Marsh & Maglavera, 2005). Because
of the pervasive nature of empowering people to use it anywhere and anytime, the mobile phone is becoming one of the most pervasive devices in the world (Chang & Chen, 2005; Ballagas, Borchers, Rohs & Sheridan, 2006). With the rapid spread of mobile phone devices and the convergence of the phone and the personal digital assistant (PDA), there is an increasing demand for a multimodal platform that combines the modalities of various interface devices to reach a greater population of users. While there is a growing demand for technologies that will allow users to connect to the Internet from anywhere through devices that are not suitable for the use of a traditional keyboard, mouse, and monitor (Zhai, Kristensson & Smith, 2005), the constraints of a typical mobile device, such as small screen size, slow speed, and inconvenient keyboard, make it cumbersome to access lengthy textual information (Anerousis & Panagos, 2002). In Taiwan, the penetration rate of mobile phone (104.6%)1 is much higher than the penetration rates of other major telecom services, including local telephone: 58.2%, Internet: 71.3%, and broadband Internet: 68.7% (Institute for Information Industry, 2007). However, the same survey also shows that the utilization rate of accessing the Internet from wireless devices is relatively low, with a penetration rate slightly lower than 50%, mainly because the text-based interaction between mobile devices and Web sites is very limited. A voice interface does not have these limitations: voice interaction escapes the physical limitations of keypads and displays as mobile devices become ever smaller, and it is much easier to say a few words than to thumb them in on a keypad where multiple key presses may be needed for entering each letter or character (Rebman, Aiken & Cegielski, 2003). Using voice as a medium to operate mobile devices also enables users’ hands to engage in some other activities without losing the ability to browse the Internet through voice commands (Feng, Sears & Karat, 2006).
According to a study from Telecom Trends International, the number of mobile commerce users worldwide will grow from 94.9 million in 2003 to 1.67 billion in 2008, and the global revenues generated from mobile commerce are expected to expand from $6.86 billion in 2003 to $554.37 billion in 2008 (de Grimaldo, 2004). A report from ZDNetAsia states that more than half of 3G traffic would be voice, and voice is still the platform on which our business is run (Tan, 2005). A study reported by the Kelsey Group claims that expenditures for speech-related services worldwide are expected to reach $41 billion by 2005 (The Kelsey Group, 2001). This report also estimates a 60-65% average annual growth rate for voice services globally by 2005, with the U.S. market expected to be 20-25% of this total. A recent example of the continuation of this trend is the outstanding growth (a 350 percent increase in quarterly revenue) of the speech self-service marketplace reported by Voxify, Inc. (Market Wire, 2006). It is believed that the demand for mobile commerce has created a market for voice-enabled applications accessible by mobile phone. Traditionally, Interactive Voice Response (IVR) systems are based on proprietary hardware and software technology, with development and deployment tightly integrated on the same hardware platform (Turner, 2004). This has resulted in high development costs. Non-portable proprietary software cannot be deployed on different platforms, and it is also inherently difficult to upgrade or modify (Dettmer, 2003). A multi-modal language is needed to support human-computer dialogs via spoken input and audio output. As an optimum solution, VoiceXML (Voice eXtensible Markup Language), a markup language for creating voice-user interfaces, bridges the gap between the Web and the speech world by utilizing speech and telephone touchtone recognition for input and prerecorded audio and text-to-speech synthesis (TTS) for output (Larson, 2003). It is based on the World Wide Web Consortium’s (W3C’s)
eXtensible Markup Language (XML) and leverages the Web paradigm for application development and deployment. By having a common language, application developers, platform vendors, and tool providers can all benefit from code portability and reuse. Furthermore, to reduce the cost of building and delivering new capabilities to telephone customers, providing voice access to Web-based applications is an attractive option. VoiceXML makes it possible for companies to write shared business logic once and focus their resources on developing only the specific user interface for each device they support. Due to the above-mentioned facts and analyses, a voice-enabled Web system (VWS), utilizing voice user interface technology, was designed and implemented through our project conducted in Taiwan. The voice mobile phone was chosen as the pervasive device for accessing our Internet application prototype, a VWS-based service, for two reasons. Firstly, as mentioned earlier, the penetration rate of the mobile phone is much higher than the rates of other major telecom services, and the mobile phone is associated tightly with people’s daily life. Secondly, the use of speech for input and output is inherent in the minds of mobile phone users. The system implemented in this research has several advantages over systems using other mobile devices such as the Palm PDA, BlackBerry, and Pocket PC. For example, VWS users can obtain information through voice instead of looking at a monitor, and VWS eliminates the requirement of a keyboard or mouse through the use of a voice user interface. Voice channels, which differ from the Web, have some limitations. For instance, the greatest limitation is that voice channels only support one type of access, acoustic, from a phone. Tomorrow’s voice Web sites will serve up voice user interfaces (VUIs) alongside the graphical user interfaces (GUIs) they serve up today (Teo & Pok, 2003). The primary objective for VUIs must be creating a positive experience for the users. In addition, getting responses promptly is also an important
concern for users. A voice Web site may have hundreds of pages. When users visit the pages by phone, they depend only on voice to navigate the site. Users must wait for the voice menus in sequence, so they spend a relatively longer time using the voice channel than a Web browser in waiting for and/or selecting their desired choices. Unduly designed VUIs will be inefficient and fail to serve users’ needs for promptness. This not only causes users’ dissatisfaction, but also introduces unnecessary inefficiency and redundancy into the system (especially for the voice recognition and synthesis components). Thus, it is necessary to design a voice user interface optimization mechanism in the voice-enabled Web system (VWS) application. The proposed VWS system applies genetic algorithms (GAs) to find a reasonably good arrangement of the site map in a voice Web site, so that the restructured site map will make users’ voice browsing experience more responsive. This GA-based VUI optimization mechanism was implemented as a software application so that it can be integrated with the VWS system. The subsequent sections of this chapter are organized as follows. Section 2 provides the background on voice-enabled Web systems, adaptive Web sites, and genetic algorithms. Section 3 describes the architecture of the VWS system and how to generate multimodal user-interface pages by using XSLT stylesheets. To illustrate how the GA-based VUI optimization process works, the experiment method together with two simple examples is presented in Section 4. Section 5 covers more comprehensive experiment results, and Section 6 concludes this chapter after the discussions.
LITERATURE REVIEW

Voice-Enabled Web System

A voice-enabled Web system is a system which provides users a voice channel, such as the telephone,
to access Web applications. With voice-enabled Web systems, firms can provide desirable voice-based Internet services, such as online customer service, online transaction service, and self-served service, through both the conventional browser interface and the new voice interface. Our voice-enabled Web system combines XML-based markup languages, automatic speech recognition (ASR), text-to-speech (TTS), and Web technologies. We use the emerging standard markup language, VoiceXML, which defines a common format to allow people to access Web content via any phone (Larson, 2003). VoiceXML uses XML tags to represent call flows and dialogs. The development of the VoiceXML standard by AT&T, IBM, Lucent Technologies, and Motorola has led to a proliferation in recent years of voice-enabled Web systems. By using this standard Web-based language, data can then be easily exchanged in voice-enabled Web systems. Voice-enabled Web technology is being deployed in a broad range of industries such as banking and retailing. With the launch of the first “voice portal”, which provides telephone users with speech-enabled access (via a natural language interface) to Web-based information and applications, voice-enabled Web technology caught people’s attention. It is speculated that various industries will soon adopt it to develop suitable Web systems to serve their own business purposes. Internet portal companies such as AOL and Yahoo and other companies like Tellme Networks, Hey Anita, and Internet Speech have been developing voice portals for providing several services. Generally speaking, a voice portal, like an Internet portal, is a single place where content from a number of sources is aggregated. For example, information services such as traffic reports, weather reports, stock quotes, bill inquiry/pay, restaurant/hotel recommendations, department store promotion information, cinema reviews, news, and e-mail can be accessed via a voice portal. Recently, an emerging term called “v-commerce” has been used to describe the technology and
its applications related to users' activities of navigating voice portals with voice commands. V-commerce examples include the use of speech technology over the telephone in commercial applications such as buying cinema/airline tickets, banking, account transfers, stock trading, and purchasing from mail-order companies (Goose, Newman, Schmidt & Hue, 2000; Yamazaki, Iwamida & Watanabe, 2004).
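As a concrete illustration of the call flows and dialogs that VoiceXML expresses, the fragment below sketches a small top-level voice menu of the kind served by a voice portal; the prompt wording, choices, and URIs are invented for this example and are not taken from any particular system described in this chapter.

    <?xml version="1.0" encoding="UTF-8"?>
    <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
      <!-- A top-level menu: the caller hears the choices in sequence
           and answers by speech or by pressing a key -->
      <menu>
        <prompt>
          Welcome to the service portal. For stock quotes, say one.
          For weather reports, say two.
        </prompt>
        <choice dtmf="1" next="http://example.org/stocks.vxml">one</choice>
        <choice dtmf="2" next="http://example.org/weather.vxml">two</choice>
        <noinput>Sorry, I did not hear you. <reprompt/></noinput>
      </menu>
    </vxml>

A VoiceXML interpreter renders such a document audibly, much as a visual browser renders HTML; this is the sense in which the markup represents a dialog rather than a page.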
Adaptive Web Site

Web users often get lost on the Internet because of its complicated structure and the information overload problem. One of the most important functions of a Web site is to assist users in searching for information by using various Web intelligence methods (Li & Zhong, 2004), and one efficient method is to construct an adaptive Web site. Adaptive Web sites are sites that automatically improve their organization and presentation by learning from visitor access patterns (Kohrs & Merialdo, 2001). Joachims, Freitag, and Mitchell (1997) initiated an adaptive Web project called WebWatcher, a “tour guide” agent for the World Wide Web whose strategy for giving advice is learned from feedback from earlier tours. WebWatcher uses the paths of people who indicated success as examples of successful navigations, and it groups people based on their stated interests rather than customizing to each individual. Perkowitz and Etzioni (2000) focused on the problem of index page synthesis, where an index page is a page consisting of links to a set of pages that cover a particular topic at a site. Their goal is to transform the Web site into a better one by generating new index pages. They assume that groups of pages which often occur together in user visits represent coherent topics in users' minds, and they analyze the Web site's access logs to find such groups. Su, Yang, Zhang, Xu, Hu, and Ma (2002) designed an adaptive Web interface based on Web log analysis and Web page clustering. They also tried to improve users'
performance by introducing index pages that minimize overall user browsing costs. Smith and Ng (2003) presented LOGSOM, a prototype system that organizes Web pages on a self-organizing map (SOM) according to user navigation patterns. They clustered Web pages according to users' navigation behaviors rather than according to Web content: instead of organizing the pages by the words they contain, they kept track of the interests of Web users and organized the pages accordingly. In this way, the SOM provided by LOGSOM can be updated regularly to reflect the current interests of the site's users. In addition, many different kinds of adaptive Web sites have recently been explored for the purpose of personalization or recommendation. A simple but common example is that some Web sites allow users to personalize the site for themselves, for instance by customizing lists of favorite links. More complicated approaches may use data mining, Web mining, content-based filtering, collaborative filtering, and other techniques to offer users personalized or adapted information and services (Chang, Changchien & Huang, 2006; Changchien, Lee & Hsu, 2004; Wang & Shao, 2004). As mentioned in the previous subsection, a voice-enabled Web system provides landline and mobile telephone users a new channel for voice-based Web browsing, which allows users to navigate and traverse the structure of the Web site entirely by voice. When browsing or navigating the VUI structure of a voice-enabled Web site, it is especially desirable to reduce users' navigation time, mainly because the one-dimensional nature of the voice channel forces users to spend much time sequentially listening to the various choices. In our research, a simple genetic algorithm (SGA) based voice user interface optimization mechanism, described later in this chapter, was designed to realize an adaptive voice-enabled Web site by automatically adapting the site map
(i.e., the Web site structure). The process of our voice user interface optimization is illustrated in Figure 1. According to the needs specified in a time-based or event-driven configuration file, the VWS can extract its site map and invoke the VUI optimization mechanism to derive a restructured site map, which substitutes for the original one and provides users a better browsing experience with more efficient and more effective voice navigation paths.

Figure 1. The process of voice user interface optimization
Genetic Algorithm: Basic Concepts

Genetic algorithms (GAs), which use randomized search and optimization techniques, are designed to simulate the processes in natural systems necessary for evolution. That is, the solutions to a problem solved by GAs are derived through an evolutionary process, based on a mechanism of natural selection, that searches for an improved solution optimizing a given fitness function. As shown in Figure 2, a GA begins with a set of randomly created solutions called the population. Pairs of solutions are taken and used to produce the offspring of the next generation, that is, a new population, motivated by the hope that the new population will be better than the old one. In the selection stage, the parent solutions (those chosen to produce offspring) are selected according to their fitness: the more suitable they are, the more
chances they get to reproduce. Crossover operates on selected parent solutions to create new (offspring) solutions. The simplest way to do this is to choose a crossover point at random and copy everything before this point from the first parent and everything after it from the other parent; crossover operators designed for a specific problem can improve the performance of the genetic algorithm. Mutation, which randomly modifies the genetic structures of some members of each new generation, is intended to prevent all solutions in the population from falling into a local optimum of the problem being solved. Culling, which takes place each time before going to the next generation, updates the population. This iteration/evolution process is repeated until an acceptable or optimal solution is found, or until some fixed time limit is reached.

Figure 2. The procedure of a genetic algorithm

In contrast to the simple genetic algorithms (SGAs) described above, which rely on the concept of biological evolution alone, hybrid genetic algorithms (HGAs) combine evolutionary search with local improvement heuristics (Misevicius, 2003; Moscato & Cotta, 2003), applying an improvement procedure to each offspring or mutated individual. Instead of random solutions, HGAs operate with the improved solutions. This
leads to a more effective algorithm when compared with an SGA, as HGAs may escape from local optima and find better solutions. An SGA is currently used in our initial implementation of the VUI optimization mechanism; in future research we plan to experiment with various HGA improvement approaches, such as ruin and recreate (Misevicius, 2003), tabu search (Moscato & Cotta, 2003), branch and bound (Al-Khayyal & Sherali, 2000), and simulated annealing (Kirkpatrick, Gelatt & Vecchi, 1983).
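To make the procedure of Figure 2 concrete, the following self-contained toy program sketches one round-based SGA in Java (the implementation language of our prototype). The bit-string “count the ones” objective is a stand-in chosen for brevity; it is not the VUI fitness function introduced later, and all names and parameter values are illustrative only.

    import java.util.*;

    public class SimpleGaSketch {
        static final Random RNG = new Random(42);
        static final int LEN = 20, POP = 10, GENERATIONS = 200;

        // Toy fitness: the number of 1-bits (to be maximized)
        static int fitness(boolean[] s) {
            int f = 0;
            for (boolean b : s) if (b) f++;
            return f;
        }

        // Selection: fitter individuals get more chances to reproduce
        static boolean[] select(List<boolean[]> pop) {
            boolean[] a = pop.get(RNG.nextInt(pop.size()));
            boolean[] b = pop.get(RNG.nextInt(pop.size()));
            return fitness(a) >= fitness(b) ? a : b;
        }

        public static void main(String[] args) {
            List<boolean[]> pop = new ArrayList<>();
            for (int i = 0; i < POP; i++) {                // random initial population
                boolean[] s = new boolean[LEN];
                for (int j = 0; j < LEN; j++) s[j] = RNG.nextBoolean();
                pop.add(s);
            }
            for (int g = 0; g < GENERATIONS; g++) {
                boolean[] p1 = select(pop), p2 = select(pop);
                boolean[] child = new boolean[LEN];        // one-point crossover
                int cut = RNG.nextInt(LEN);
                for (int j = 0; j < LEN; j++) child[j] = j < cut ? p1[j] : p2[j];
                if (RNG.nextDouble() < 0.1)                // occasional mutation
                    child[RNG.nextInt(LEN)] ^= true;
                int worst = 0;                             // culling: replace the worst
                for (int i = 1; i < POP; i++)
                    if (fitness(pop.get(i)) < fitness(pop.get(worst))) worst = i;
                if (fitness(child) > fitness(pop.get(worst))) pop.set(worst, child);
            }
            System.out.println("best fitness: "
                    + pop.stream().mapToInt(SimpleGaSketch::fitness).max().getAsInt());
        }
    }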
SYSTEM ARCHITECTURE

Our VWS prototype was implemented using open technologies including eXtensible HyperText Markup Language (XHTML), XML, eXtensible Stylesheet Language for Transformations (XSLT), VoiceXML, the MySQL database, the Apache Web server, the Apache Tomcat application server, and various Java APIs such as Java Servlet, JavaServer Pages (JSP), Java Database Connectivity (JDBC), and Java Cryptography Extension (JCE). Not only is Java suggested as the “write once, run everywhere” language for writing applications for various smart phones (Chang & Chen, 2005), but Java's modular nature also allows it to expand and support solutions for new computational problems. It has evolved from a popular client applet language into a cross-platform GUI builder and an application server platform, and this same modular nature now allows Java to drive wireless and multimodal applications. Java 2 Platform, Micro Edition (J2ME) is designed for non-browser-based devices, and it is not exactly a subset of Java 2 Platform, Standard Edition (J2SE) (Sun Microsystems, 2004): J2ME keeps some of the J2SE core library application programming interfaces (APIs) but substitutes others with lightweight components through the javax.microedition package. As shown in Figure 3, a multimodal application architecture, which offers new ways of accessing web applications
from any device at any location, was adopted in this study by utilizing various modes of interfaces to interact with end users (Chang & Minkin, 2006).

Figure 3. The multimodal approach for supporting voice-enabled web applications

In addition to the conventional browser interface and the targeted voice interface, our multimodal web system approach also provides the ability to access web-based information and applications through multiple channels such as a PDA, smart phone, Pocket PC, or BlackBerry. This multimodal approach facilitates the sharing of business logic and back-end processes in a multiple-tiered application environment, and thus frees up time and resources for concentrating on the design and implementation specifics of the user interface for each device. Multimodal applications may use both wireless and voice devices. As the name of the platform suggests, J2ME supports wireless devices such as PDAs and smart phones, and many J2ME-enabled devices also support a voice channel, so they may be used to interact with VoiceXML-based services over the phone voice connection. To build the VWS prototype, a voice server was used as the platform that enables the creation of voice applications through industry standards, including XML, VoiceXML, and Java (Burke, 2001; Rodriguez, Ho, Kempny, Pedreschi, & Richards, 2002). XML facilitates application integration and data sharing, and enables the exchange of self-describing information elements between computers. In addition to combining XML-based markup languages, automatic speech recognition (ASR), text-to-speech (TTS), and Web technologies, our VWS also uses the emerging standard markup language, VoiceXML, which defines a common format to allow people to access Web content via any phone (Chang & Minkin, 2006). Two options were considered in our study for enabling telephony hardware to integrate with the voice server: an Intel Dialogic-based voice server system and the Cisco telephony platform. The voice server on the Dialogic platform utilizes a specialized telephony card manufactured by Dialogic, which is connected directly to the telephony interface; calls are then managed by the Dialogic platform, which passes incoming calls to the voice server application. The voice server facilitates the deployment of voice applications by interfacing with various voice standards (Rodriguez et al., 2002). The voice server for Cisco utilizes the Voice over IP (VoIP) protocol. Normally the voice server would be configured to work with a Cisco voice router that has a telephony interface connection; when a phone call is made, the voice router converts the call to VoIP and redirects the voice packets to the voice server. The system architecture is illustrated in Figure 4.
Figure 4. The architecture of the VWS system
When the voice server starts, VoiceXML browsers start up and wait for calls; each VoiceXML browser serves one telephone call. When a user places a call to a designated phone number, a computer on the voice site (i.e., the voice server) answers the call and retrieves the initial VoiceXML script from a VoiceXML content server, which can be a Web server located anywhere on the Web. An interpreter on the voice site parses and executes the script by playing prompts, capturing responses, and passing the responses to a speech recognition engine on the voice system. Just as a Web browser renders HyperText Markup Language (HTML) documents visually, a VoiceXML interpreter on the voice site renders VoiceXML documents audibly and allows telephone users to access services that are typically available to Web users. Once the voice system has gathered the necessary information from the caller, the interpreter translates it into a request to the VoiceXML content server, i.e., the web server. When the web server receives the request, it returns a VoiceXML page, with either a canned response or dynamically generated VoiceXML scripts, containing the information requested by the caller. Responses are passed from the Web server to the voice site via the HyperText Transfer Protocol (HTTP). Finally, the
text-to-speech (TTS) engine, a key component of the voice server, converts the VoiceXML scripts into speech and delivers the voice responses to the user via the telephone channel. The process continues, simulating a natural language conversation between the caller and the voice server. We decided to use Java servlets to access the database, and used Java Database Connectivity (JDBC) to connect to a relational database, MySQL. The driver we used was mysql-connector-java-3.1.1-alpha-bin.jar, which is available for free download at http://dev.mysql.com. Java servlets were used for validating logins, constructing user requests, processing the requests from end users, and generating output results in the form of an XML document (Burke, 2001; Rodriguez et al., 2002). Afterwards, XSLT was used to convert the XML documents generated by the Java servlets into XHTML documents, VoiceXML documents, and WML (Wireless Markup Language) decks to suit different devices. XSLT can also be used to perform additional tasks within an application that uses XML as its main data representation model (Burke, 2001). The voice server, which contains the voice recognition and synthesis engines used to automate the conversation between the site and the
caller, is set up between the phone and the web server to interpret the VoiceXML documents, acting as a middleware processor. Any web site can be a VoiceXML content server. Services provided by this VWS system can give subscribers access to content offered by different sources of Internet applications and services through PSTN (Public Switched Telephone Network) telephones, wired or wireless.
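To make the pipeline just described concrete, the following sketch shows one way the pieces can be wired together in a single servlet: JDBC fetches the data, the result is expressed as XML, and an XSLT stylesheet renders it as VoiceXML. This is our own minimal illustration, not the project's actual code; the table, column, stylesheet, and connection details are invented, and error handling is reduced to a minimum.

    import java.io.*;
    import java.sql.*;
    import javax.servlet.http.*;
    import javax.xml.transform.*;
    import javax.xml.transform.stream.*;

    public class QuoteServletSketch extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost/vws", "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT price FROM quotes WHERE symbol = ?")) {
                ps.setString(1, req.getParameter("symbol"));
                ResultSet rs = ps.executeQuery();
                // Build the channel-neutral XML result document
                String xml = "<quote><symbol>" + req.getParameter("symbol")
                        + "</symbol><price>" + (rs.next() ? rs.getString(1) : "n/a")
                        + "</price></quote>";
                // Render for the voice channel; other stylesheets would
                // produce XHTML or WML from the same XML
                resp.setContentType("application/voicexml+xml");
                Transformer toVoice = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(
                            getServletContext().getRealPath("/toVoiceXML.xsl")));
                toVoice.transform(new StreamSource(new StringReader(xml)),
                        new StreamResult(resp.getWriter()));
            } catch (SQLException | TransformerException e) {
                throw new IOException(e);
            }
        }
    }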
EXPERIMENT METHOD FOR VUI OPTIMIZATION

One of our research objectives is to find a reasonably good arrangement of the site map of a voice Web site. To serve this need, a simulation approach was designed to experiment with various hierarchically structured site maps (i.e., VUI structures or VUI trees), modeled by a tree structure as illustrated in Figure 5. When a user, Joe, uses his mobile phone to call the VWS site, he reaches the root node of the VUI tree and hears a greeting message, which may sound like: “Welcome to XYZ online service portal. If you would like to get a stock quote please say ONE, for restaurant reservation please say TWO, for weather report please say THREE, …” Joe may say THREE and traverse the VUI tree to N1,3, the node at the next level for weather reports. At that point, Joe is answered with more choices, such as: “For domestic weather report please say ONE, for other countries
in Asia please say TWO, for European countries please say THREE, …” Joe may opt to say TWO and traverse to N2,8 of the VUI tree shown in Figure 5. The interactions between Joe and the VWS service continue until Joe navigates to a leaf node of the VUI tree, where he can finally listen to the weather information of his interest. Instead of formally describing the VUI optimization mechanism, we first show it by a simple example, which easily illustrates not only the principle and the potential improvement of the VUI optimization process but also the model used in the optimization experiments.
An Example of VUI Optimization

In this simulation example, the root node is at level 0, and Ni,j is the jth node at level i. Node access time is the length of audio heard by a user during each visit to the node. Node access count is the frequency with which users navigate through and access the node in a given period. Leaf node access time is the total access time accumulated when a user navigates from the root to the leaf node. Total time is the sum, over all leaf nodes, of each leaf node access time multiplied by its node access count. In a voice site, a user's destination is always a leaf node; therefore, we focus on the leaf nodes in our model, which is why only leaf nodes have the property of “access count.” The properties of the nodes used in this example are described in Table 1.
Figure 5. A VUI tree for modeling the hierarchically structured site map
Table 1. The properties of the nodes in the VUI tree shown in Figure 5

Node  | Node access time | Leaf node access time | Node access count
Root  | 5 | n/a | n/a
N1,1  | 6 | n/a | n/a
N1,2  | 5 | n/a | n/a
N1,3  | 4 | n/a | n/a
N2,1  | 5 | 16  | 70
N2,2  | 3 | 19  | 100
N2,3  | 6 | 25  | 40
N2,4  | 8 | 24  | 40
N2,5  | 5 | 29  | 60
N2,6  | 4 | 33  | 100
N2,7  | 7 | 27  | 70
N2,8  | 4 | 31  | 100
N2,9  | 8 | 39  | 50
N2,10 | 3 | 42  | 60
In this case, the total time = 16×70 + 19×100 + 25×40 + 24×40 + 29×60 + 33×100 + 27×70 + 31×100 + 39×50 + 42×60 = 1,120 + 1,900 + 1,000 + 960 + 1,740 + 3,300 + 1,890 + 3,100 + 1,950 + 2,520 = 19,480 seconds. (1)
The values of node access time were randomly created in the range between 3 and 8 seconds, and the values of node access count, applicable to leaf nodes only, were randomly generated between 40 and 100. Each value of leaf node access time is derived from the node access time values of all nodes heard on the path from the root to that particular leaf node; because the voice menus are presented sequentially, reaching a node entails listening to the prompts of the preceding choices as well. For example, the leaf node access time of N2,1 is the summation of the node access time values of the root, N1,1, and N2,1 (i.e., 5 + 6 + 5 = 16), and the leaf node access time of N2,6 is the summation of the node access time values of the root, N1,1, N1,2, N2,4, N2,5, and N2,6 (i.e., 5 + 6 + 5 + 8 + 5 + 4 = 33).
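The following self-contained sketch (our own illustration, not the authors' code) expresses this cost model in Java: descending to the k-th choice under a menu first costs the prompts of choices 1 through k, so each leaf accumulates the access times of all prompts heard on the way down. Run on the data of Table 1 it reproduces the total of 19,480 seconds from (1).

    import java.util.*;

    class VuiCostModel {
        static class Node {
            final String name;
            final int accessTime;    // seconds of audio for this node's prompt
            final int accessCount;   // meaningful for leaf nodes only
            final List<Node> children = new ArrayList<>();
            Node(String name, int accessTime, int accessCount) {
                this.name = name; this.accessTime = accessTime; this.accessCount = accessCount;
            }
        }

        // Total time = sum over leaves of (leaf node access time × access count)
        static long totalTime(Node root) {
            return walk(root, root.accessTime);
        }

        private static long walk(Node node, int timeSoFar) {
            if (node.children.isEmpty())
                return (long) timeSoFar * node.accessCount;  // weight by popularity
            long sum = 0;
            int listened = 0;
            for (Node child : node.children) {
                listened += child.accessTime;                // earlier siblings are heard first
                sum += walk(child, timeSoFar + listened);
            }
            return sum;
        }

        public static void main(String[] args) {
            Node root = new Node("Root", 5, 0);
            Node n11 = new Node("N1,1", 6, 0), n12 = new Node("N1,2", 5, 0), n13 = new Node("N1,3", 4, 0);
            root.children.addAll(Arrays.asList(n11, n12, n13));
            int[][] leaves = {{5,70},{3,100},{6,40},{8,40},{5,60},{4,100},{7,70},{4,100},{8,50},{3,60}};
            Node[] parents = {n11, n11, n11, n12, n12, n12, n13, n13, n13, n13};
            for (int i = 0; i < leaves.length; i++)
                parents[i].children.add(new Node("N2," + (i + 1), leaves[i][0], leaves[i][1]));
            System.out.println(totalTime(root));             // prints 19480
        }
    }

Reordering the children as in Figure 6 and recomputing yields 17,260, which is exactly the quantity the genetic algorithm described below is set to minimize.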
Using the VUI optimization mechanism, we can restructure the VUI tree and calculate the new total time. The new tree structure is shown in Figure 6, in which the nodes annotated with star marks [*] were restructured, and the properties of the nodes of the restructured tree are described in Table 2. Note that while the values of node access time and node access count on all nodes are unchanged, the values of leaf node access time on some leaf nodes are changed because the VUI tree is restructured.
Figure 6. A restructured VUI Tree [Nodes with star mark (*) were restructured.]
Table 2. The properties of the nodes in the restructured voice Web site tree

Node   | Node access time | Leaf node access time | Node access count
Root   | 5 | n/a | n/a
N1,1   | 6 | n/a | n/a
N1,2   | 5 | n/a | n/a
N1,3   | 4 | n/a | n/a
*N2,2  | 3 | 14  | 100
*N2,1  | 5 | 19  | 70
*N2,3  | 6 | 25  | 40
*N2,6  | 4 | 20  | 100
N2,5   | 5 | 25  | 60
*N2,4  | 8 | 33  | 40
*N2,8  | 4 | 24  | 100
*N2,10 | 3 | 27  | 60
N2,9   | 8 | 35  | 50
*N2,7  | 7 | 42  | 70
In the new condition, the total time = 14×100 + 19×70 + 25×40 + 20×100 + 25×60 + 33×40 + 24×100 + 27×60 + 35×50 + 42×70 = 1,400 + 1,330 + 1,000 + 2,000 + 1,500 + 1,320 + 2,400 + 1,620 + 1,750 + 2,940 = 17,260 seconds. (2)
In this sample case of using our optimization model, the improvement (the decrease in total time) is about 11.40% ([(1) – (2)] / (1) ≈ 0.1140), even though this is just a simple illustration. As the number of voice pages grows and the calculations are carried out computationally, the improvements achieved by GA-based optimization approaches for the proposed voice-enabled Web system are expected to be very attractive and better than the result of this illustration.
Genetic Algorithm for VUI Optimization

The VUI optimization process of restructuring a voice-enabled Web site can be modeled by a tree structure (i.e., the VUI tree) as shown in Figure 5. For each VUI optimization experiment, our simulation program creates a VUI tree, with the values of node access time of all nodes and the values of node access count of all leaf nodes randomly generated. These values remain unchanged throughout the entire GA evolution process. However, since each value of leaf node access time is derived from the node access time values of all nodes on the path from the root to that particular leaf node, it may change when the VUI structure changes. As illustrated in the previous example, each possible solution of this problem can be represented by a VUI tree, and the fitness function of this GA-based optimization is the total time, i.e., the sum over all leaf nodes of leaf node access time multiplied by node access count. Thus, the objective of our optimization process is to minimize the fitness function

F(X) = Σ_{i=1}^{n} LeafNodeAccessTime(i) × LeafNodeAccessCount(i)

where n is the number of leaf nodes.
The VUI tree of a voice-enabled Web site can be restructured to represent alternative Web site maps offering exactly the same contents and
services. While there are many ways to restructure the VUI tree, in our initial design of the VUI optimization experiment we only consider the most straightforward approach: rearranging the sequence of choices under every non-leaf node. Other approaches, such as node promotion/demotion (moving some nodes to higher or lower levels of the VUI tree) and the creation of extra links (providing shortcuts for VUI tree navigation), may be considered in future experiments. Changing the order of the child nodes of every non-leaf node can create many different restructured VUI trees, and every such tree represents a possible solution. Thus, for a VUI tree with hundreds of nodes, the number of potential restructured VUI trees can be very high. For example, consider a VUI tree with 200 non-leaf nodes and assume each non-leaf node has six child nodes (i.e., there are six choices available at each non-leaf node); then there could be up to (6!)^200 ≈ 2.9275 × 10^571 restructured VUI trees, or possible solutions, to be considered in our VUI optimization problem. This permutation procedure is randomly applied to all non-leaf nodes of the initial VUI tree to generate nine more trees, and all ten trees are put into the population for subsequent evolution. In our genetic algorithm, every non-leaf node is encoded by the sequence of choices available at that node. For a non-leaf node N randomly selected from tree A, its sequence of choices (or order of child nodes) can be represented as N(A) = 1 2 3 4 5 6 7 8 9, while the same non-leaf node N in tree B may have a different sequence of choices, represented as N(B) = 4 5 6 9 1 2 7 3 8. These representations of the GA variables can also be called chromosomes. Assume that our algorithm randomly selects tree A and tree B from the population, identifies node N for the crossover operation, and then selects two positions, for example the 3rd and the 5th positions, to define how to cross over the chromosomes to produce new ones:

N(A) = 1 2 3 4 5 6 7 8 9 → crossover → N(A) = 1 2 6 9 1 6 7 8 9
N(B) = 4 5 6 9 1 2 7 3 8 → crossover → N(B) = 4 5 3 4 5 2 7 3 8

The mapping relationships of this crossover operation can be identified as: 3 ↔ 6, 4 ↔ 9, 5 ↔ 1. The mapping relationships are then applied to the unexchanged parts of the chromosomes to create the following chromosomes for the offspring:

N’(A) = 5 2 6 9 1 3 7 8 4
N’(B) = 9 1 3 4 5 2 7 6 8

Mutation may also be applied randomly, at a pre-defined probability rate, to the chromosomes. For example, the chromosome N’(B) = 9 1 3 4 5 2 7 6 8 can be encoded as a binary string:

N’(B) = 1001 0001 0011 0100 0101 0010 0111 0110 1000

The algorithm can randomly modify some bits to simulate the mutation operation, such as:

N’(B) = 1001 0001 0111 0100 0100 0010 0111 0110 1000 = 9 1 7 4 4 2 7 6 8

The mapping relationships of this mutation operation can be identified as: 3 ↔ 7, 5 ↔ 4. Again, the mapping relationships are applied to the unchanged part of the chromosome to create the following new chromosome:

N”(B) = 9 1 7 5 4 2 3 6 8

Through the above-mentioned population creation, selection, crossover, and mutation operations, new chromosomes can be created and used to generate new VUI trees (solutions). Each newly generated VUI tree is evaluated by the fitness function to decide whether it is a better solution, and this value is used by the culling operation to decide whether the newly generated tree should be placed into the population, replacing a less qualified VUI tree.
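The crossover-and-repair step just illustrated is essentially the classical partially mapped crossover (PMX). The following self-contained sketch (our own illustration, not the authors' implementation) reproduces the worked example; permutations are stored as int arrays, and positions are 0-based, so the 3rd through 5th positions in the text become indices 2 through 4.

    import java.util.*;

    public class PmxCrossoverSketch {

        // Build one child: start from parent 'base', graft positions lo..hi
        // (inclusive) from 'donor', then repair duplicates outside the segment
        // by chasing the donor-to-base value mapping. The chase terminates
        // because each step moves to a value held at a distinct segment position.
        static int[] pmxChild(int[] base, int[] donor, int lo, int hi) {
            int[] child = base.clone();
            Map<Integer, Integer> map = new HashMap<>();
            for (int i = lo; i <= hi; i++) {
                child[i] = donor[i];          // copy the exchanged segment
                map.put(donor[i], base[i]);   // remember the mapping, e.g. 6 -> 3
            }
            for (int i = 0; i < child.length; i++) {
                if (i >= lo && i <= hi) continue;
                int v = child[i];
                while (map.containsKey(v)) v = map.get(v);  // repair duplicates
                child[i] = v;
            }
            return child;
        }

        public static void main(String[] args) {
            int[] a = {1, 2, 3, 4, 5, 6, 7, 8, 9};   // N(A)
            int[] b = {4, 5, 6, 9, 1, 2, 7, 3, 8};   // N(B)
            System.out.println(Arrays.toString(pmxChild(a, b, 2, 4)));
            // [5, 2, 6, 9, 1, 3, 7, 8, 4] = N'(A)
            System.out.println(Arrays.toString(pmxChild(b, a, 2, 4)));
            // [9, 1, 3, 4, 5, 2, 7, 6, 8] = N'(B)
        }
    }

The binary-encoded mutation shown above can be repaired with the same chasing idea, using the mapping induced by the mutated positions.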
EXPERIMENT RESULTS

Our system was implemented on a 1.73 GHz Pentium M 740 laptop PC with 1 GB RAM and on a 2.8 GHz Pentium 4 desktop PC with 2 GB RAM, both running the Microsoft
Windows XP operating system. The simulation results obtained from 22 experiments are summarized in Table 3, and the results of 15 additional experiments are summarized in Table 4. In these experiments, a total of 37 VUI trees were generated with different parameter settings (including the depth of the VUI tree, the number of children of each non-leaf node, the number of simulation cycles, access time, access count, and so on). Furthermore, the initial total time and the optimized (better) total time derived in each experiment were used to calculate the improvement by the following straightforward formula:

Improvement (%) = (Initial Total Time – Better Total Time) / Initial Total Time

Both Table 3 and Table 4 show the experiment results obtained from the aforementioned SGA simulations. As can be seen, most cases in these two tables show improvements over 15%, and the improvements in some cases even reach nearly 50%. We also noticed that the results for large tree structures were not significant (shown as gray-highlighted items in Table 3 and Table 4), so we increased the number of search iterations and recalculated them. As shown in Table 5, when we changed the number of search iterations from 20,000 to 500,000, the improvement increased dramatically. That is, the more search iterations we allow, the more improvement we obtain; in other words, for large tree structures we can eventually obtain satisfactory improvements if we increase the number of search iterations.
DISCUSSION AND CONCLUSION

While pervasive computing continues to affect more and more people in the world, there will inevitably be plenty of opportunities and revolutionary benefits for everyone who participates. The most significant pervasive computing applications have been in the enterprise market rather than the consumer sector; however, the future of
Table 3. Results of SGA simulations (number of children = fixed; access time = 2 ~ 15; access counts = 1 ~ 10,000)

Level | Number of Children | Iterations | Initial Total Time (s) | Better Total Time (s) | Improvement (%)
2 | 3 | 20,000 | 1,886,608 | 960,008 | 49.1
2 | 4 | 20,000 | 5,008,968 | 2,705,967 | 46.0
2 | 5 | 20,000 | 9,297,450 | 5,156,684 | 44.5
2 | 6 | 20,000 | 14,627,224 | 9,079,845 | 37.9
3 | 3 | 20,000 | 7,160,233 | 3,804,627 | 46.9
3 | 4 | 20,000 | 25,905,538 | 16,690,080 | 35.6
3 | 5 | 20,000 | 65,340,923 | 45,643,277 | 30.1
3 | 6 | 20,000 | 155,856,635 | 109,391,621 | 29.8
4 | 3 | 20,000 | 33,264,675 | 23,520,746 | 29.3
4 | 4 | 20,000 | 139,935,642 | 107,410,143 | 23.2
4 | 5 | 20,000 | 431,157,562 | 331,317,272 | 23.2
4 | 6 | 20,000 | 1,132,148,584 | 879,033,227 | 22.4
5 | 3 | 20,000 | 124,114,791 | 92,712,893 | 25.3
5 | 4 | 20,000 | 652,978,855 | 535,596,066 | 18.0
5 | 5 | 20,000 | 2,700,049,188 | 2,450,119,893 | 9.3
5 | 6 | 20,000 | 8,416,055,793 | 7,975,813,132 | 5.2
6 | 3 | 20,000 | 419,713,448 | 339,456,804 | 19.1
6 | 4 | 20,000 | 3,194,265,541 | 2,680,976,660 | 16.1
6 | 5 | 20,000 | 16,141,177,101 | 15,988,455,697 | 1.0
6 | 6 | 20,000 | 60,592,359,070 | 60,590,742,389 | 0.0
Table 4. Results of SGA simulations (number of children = varied; access time = 2 ~ 15; access counts = 1 ~ 10,000)

Level | Number of Children | Iterations | Initial Total Time (s) | Better Total Time (s) | Improvement (%)
2 | 4 to 5 | 20,000 | 6,671,516 | 4,049,286 | 39.3
2 | 3 to 6 | 20,000 | 5,841,651 | 3,263,079 | 44.1
2 | 2 to 7 | 20,000 | 3,803,479 | 2,071,459 | 45.5
3 | 4 to 5 | 20,000 | 49,441,465 | 35,029,192 | 29.2
3 | 3 to 6 | 20,000 | 34,589,124 | 24,112,238 | 30.3
3 | 2 to 7 | 20,000 | 26,332,244 | 20,114,552 | 23.6
4 | 4 to 5 | 20,000 | 310,882,476 | 249,067,717 | 19.9
4 | 3 to 6 | 20,000 | 237,910,059 | 195,086,909 | 18.0
4 | 2 to 7 | 20,000 | 167,725,927 | 134,812,794 | 19.6
5 | 4 to 5 | 20,000 | 1,637,790,586 | 1,249,947,619 | 23.7
5 | 3 to 6 | 20,000 | 1,434,332,917 | 1,258,053,217 | 12.3
5 | 2 to 7 | 20,000 | 931,088,499 | 816,875,439 | 12.3
6 | 4 to 5 | 20,000 | 8,997,595,006 | 8,892,944,474 | 1.2
6 | 3 to 6 | 20,000 | 7,832,316,476 | 7,731,169,501 | 1.3
6 | 2 to 7 | 20,000 | 4,932,297,039 | 4,744,909,127 | 3.8
Table 5. Results of SGA simulations with more iterations (access time = 2 ~ 15; access counts = 1 ~ 10,000)

Level | Number of Children | Iterations | Initial Total Time (s) | Better Total Time (s) | Improvement (%)
5 | 6 | 500,000 | 8,416,055,793 | 7,087,376,995 | 15.8
6 | 6 | 500,000 | 60,592,359,070 | 53,334,886,394 | 12.0
6 | 4 to 5 | 500,000 | 8,997,595,006 | 7,338,143,520 | 18.4
6 | 3 to 6 | 500,000 | 7,832,316,476 | 6,768,258,648 | 13.6
pervasive computing will be supplemented by applications used by a wider variety of professionals and by more horizontal applications. Eventually, access to conventional desktop and Internet applications through pervasive devices will become very attractive and could lead to pervasive computing being used as much in the consumer sector as it is in the enterprise world. To support various types of pervasive devices in the conventional way, multiple applications have to be developed independently, each satisfying one type of device. This practice exponentially increases the cost and complexity of a system, and complicates its manageability, whenever new devices or changes are introduced. To resolve this issue, our project researched both the theoretical concepts of the technologies and practical applications of those concepts by adopting a new software application architecture (see Figure 3) that enables one single application to interface simultaneously with various types of distributed devices such as PCs, handheld computers, PDAs, WAP-enabled wireless devices, phones, and others. This multimodal application architecture overcomes the difficulties by centralizing the business and application logic while expanding the device interfaces; since common business and application logic is centralized, maintenance and enhancement become much easier. Our multimodal web system, VWS, was designed and implemented based on this architecture to serve as a “proof of concept” example of this new e-commerce application paradigm. Nowadays, mobile and wireless technologies are becoming increasingly prevalent, and there is a growing demand for technology that will allow
users to connect to the Internet from anywhere through devices that are not suitable for the use of a traditional keyboard, mouse, and monitor. In the near future, human-computer voice interfaces will become important tools for overcoming the accessibility limitations of conventional human-computer interfaces. Based on the multimodal architecture, this chapter describes how a voice-enabled web system (VWS) prototype could be implemented to provide an interactive voice channel using an Apache web server, a voice server, and a Java servlet engine. We also showed through our project that multimodal user interface pages can be generated by using technologies including eXtensible Markup Language (XML), eXtensible Stylesheet Language for Transformations (XSLT) (Burke, 2001), VoiceXML (Larson, 2003), and Java technology (Sun Microsystems, 2004). As a matter of fact, our project also reconfirmed that voice interfaces not only help overcome the accessibility limitations of conventional human-computer interfaces, but also free mobile device users' hands for other activities without losing the ability to browse the Internet through voice commands. In terms of enhancing users' experience and improving overall system performance, a GA-based dynamic structure approach, which can restructure the site map according to users' demands or the overall performance needs of a system, was applied to our VWS system. Our experiment results showed that this optimization approach may be adopted by adaptive VWS systems supporting large-scale enterprise applications. To improve the rate of convergence
of the optimization approach used by the adaptive VWS system, we plan to add heuristic rules, such as ruin and recreate (Misevicius, 2003) and tabu search (Moscato & Cotta, 2003), to improve the simple genetic algorithm used in the VWS system, which will eventually have the self-learning ability to optimize itself automatically, dynamically, and effectively. In their influential paper on the challenges associated with nomadic or pervasive computing, Lyytinen and Yoo (2002) outline eight research themes and twenty research questions, covering a wide range of topics in the heartland of information systems research. If we were to choose the research question posed by them that comes nearest to what we do here, it would be their research question 1.1, namely: “How do we design and integrate sets of personalized mobile services that support users' task execution in multiple social and physical contexts?” (Lyytinen & Yoo, 2002, p. 380). Our contribution lies in showing how a VWS can be designed to provide an interactive voice channel using readily available information technology products, such as the Apache web server, the voice server, and the servlet engine. Furthermore, we describe how multimodal user-interface pages for supporting various wireless devices were implemented by using technologies including eXtensible Markup Language (XML), eXtensible Stylesheet Language for Transformations (XSLT), and Java technologies. Last but not least, compared with the Web browser-based interface the voice channel is slower; therefore, optimization techniques, such as the GA-based algorithm described in this chapter, are needed to enhance the responsiveness of VWS-based services and applications.
ACKNOWLEDGMENT

The editorial efforts and the invaluable comments from the editor are highly appreciated. The author would also like to thank the National Science
Council, Taiwan, for financially supporting this work under contract number NSC-96-2221-E005-088-MY2.
REFERENCES

Al-Khayyal, F. A., & Sherali, H. D. (2000). On finitely terminating branch-and-bound algorithms for some global optimization problems. SIAM Journal on Optimization, 10(4), 1049–1057. doi:10.1137/S105262349935178X

Anerousis, N., & Panagos, E. (2002). Making voice knowledge pervasive. IEEE Pervasive Computing, 1(2), 42–48. doi:10.1109/MPRV.2002.1012336

Ballagas, R., Borchers, J., Rohs, M., & Sheridan, J. G. (2006). The smart phone: A ubiquitous input device. IEEE Pervasive Computing, 5(1), 70–77. doi:10.1109/MPRV.2006.18

Burke, E. (2001). Java & XSLT. California: O'Reilly.

Chang, S. E., Changchien, S. W., & Huang, R.-H. (2006). Assessing users' product-specific knowledge for personalization in electronic commerce. Expert Systems with Applications, 30(4), 682–693. doi:10.1016/j.eswa.2005.07.021

Chang, S. E., & Minkin, B. (2006). The implementation of a secure and pervasive multimodal Web system architecture. Information and Software Technology, 48(6), 424–432. doi:10.1016/j.infsof.2005.12.012

Chang, Y.-F., & Chen, C. S. (2005). Smart phone - the choice of client platform for mobile commerce. Computer Standards & Interfaces, 27(4), 329–336. doi:10.1016/j.csi.2004.10.001
Changchien, S. W., Lee, C. F., & Hsu, Y. J. (2004). Online personalized sales promotion in electronic commerce. Expert Systems with Applications, 27(1), 35–52. doi:10.1016/j.eswa.2003.12.017
Kirkpatrick, S., Gelatt, C. D. Jr, & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680. doi:10.1126/science.220.4598.671
de Grimaldo, S. W. (2004). Mobile Commerce Takes off. Telecom Trends International, Inc., Virginia. Retrieved November 15, 2007, from http://www.telecomtrends.net/reports.htm
Kohrs, A., & Merialdo, B. (2001). Creating user-adapted Web sites by the use of collaborative filtering. Interacting with Computers, 13, 695–716. doi:10.1016/S0953-5438(01)00038-8
Dettmer, R. (2003). It’s good to talk (speech technology for online services access). IEE Review, 49, 30–33.
Larson, J. A. (2003). VoiceXML and the W3C speech interface framework. IEEE MultiMedia, 10, 91–93. doi:10.1109/MMUL.2003.1237554
Feng, J., Sears, A., & Karat, C.-M. (2006). A longitudinal evaluation of hands-free speech-based navigation during dictation. International Journal of Human-Computer Studies, 64(6), 553–569. doi:10.1016/j.ijhcs.2005.12.001
Li, Y., & Zhong, N. (2004). Web mining model and its applications for information gathering. Knowledge-Based Systems, 17(3), 207–217. doi:10.1016/j.knosys.2004.05.002
Goose, S., Newman, M., Schmidt, C., & Hue, L. (2000). Enhancing Web accessibility via the Vox Portal and a Web-hosted dynamic HTML VoxML converter. Computer Networks, 33, 583–592. doi:10.1016/S1389-1286(00)00036-0

Gulliver, S. R., Serif, T., & Ghinea, G. (2004). Pervasive and standalone computing: the perceptual effects of variable multimedia quality. International Journal of Human-Computer Studies, 60(5/6), 640–665. doi:10.1016/j.ijhcs.2003.11.002

Institute for Information Industry. (2007). Survey on the mobile Internet in Taiwan for Q3 2007. ACI-FIND, focus on Internet news and data. Retrieved January 2, 2008, from http://www.find.org.tw/find/home.aspx?page=many&id=184

Joachims, T., Freitag, D., & Mitchell, T. (1997). WebWatcher: A tour guide for the World Wide Web. In Proceedings of IJCAI-97, the Fifteenth International Joint Conference on Artificial Intelligence, Nagoya, Japan (pp. 770-775).
Lyytinen, K., & Yoo, Y. (2002). The next wave of nomadic computing. Information Systems Research, 13(4), 377–388. doi:10.1287/isre.13.4.377.75

Market Wire. (2006). Voxify Reports Outstanding Growth, Increased Momentum in the Speech Self-Service Marketplace. Retrieved August 10, 2006, from http://www.findarticles.com/p/articles/mi_pwwi/is_200605/ai_n16136434

Misevicius, A. (2003). Genetic algorithm hybridized with ruin and recreate procedure: Application to the quadratic assignment problem. Knowledge-Based Systems, 16(5-6), 261–268. doi:10.1016/S0950-7051(03)00027-3

Moscato, P., & Cotta, C. (2003). Gentle introduction to memetic algorithms. In F. Glover & G. Kochenberger (Eds.), Handbook of metaheuristics (pp. 105-144). Boston: Kluwer Academic Publishers.

Perkowitz, M., & Etzioni, O. (2000). Towards adaptive Web sites: Conceptual framework and case study. Artificial Intelligence, 118, 245–275. doi:10.1016/S0004-3702(99)00098-3
Rebman, C. M. Jr., Aiken, M. W., & Cegielski, C. G. (2003). Speech recognition in the human-computer interface. Information & Management, 40(6), 509–519. doi:10.1016/S0378-7206(02)00067-8

Rodriguez, A., Ho, W.-K., Kempny, G., Pedreschi, M., & Richards, N. (2002). IBM WebSphere Voice Server 2.0 Implementation Guide. IBM Redbooks, IBM.

Roussos, G., Marsh, A. J., & Maglavera, S. (2005). Enabling pervasive computing with smart phones. IEEE Pervasive Computing, 4(2), 20–27. doi:10.1109/MPRV.2005.30

Smith, K. A., & Ng, A. (2003). Web page clustering using a self-organizing map of user navigation patterns. Decision Support Systems, 35, 245–256. doi:10.1016/S0167-9236(02)00109-4

Su, Z., Yang, Q., Zhang, H., Xu, X., Hu, Y.-H., & Ma, S. (2002). Correlation-based Web document clustering for adaptive Web interface design. Knowledge and Information Systems, 4, 151–167. doi:10.1007/s101150200002

Sun Microsystems. (2004). Information on J2ME and J2SE. Retrieved June 2, 2007, from http://java.sun.com/j2me/ and http://java.sun.com/j2se/

Tan, A. (May 2005). Voice to dominate 3G traffic, says expert. ZDNetAsia. Retrieved June 2, 2007, from http://www.zdnetasia.com/news/communications/0,39044192,39231956,00.htm
Teo, T., & Pok, S. (2003). Adoption of WAP-enabled mobile phones among Internet users. Omega: The International Journal of Management Science, 31, 483–498. doi:10.1016/j.omega.2003.08.005

The Kelsey Group. (2001, March). The global voice ecosystem (Analyst Report). The Kelsey Group.

Toye, E., Sharp, R., Madhavapeddy, A., & Scott, D. (2005). Using smart phones to access site-specific services. IEEE Pervasive Computing, 4(2), 60–66. doi:10.1109/MPRV.2005.44

Turner, K. (2004). Analysing interactive voice services. Computer Networks, 45(5), 665–685. doi:10.1016/j.comnet.2004.03.005

Wang, F. H., & Shao, H. M. (2004). Effective personalized recommendation based on time-framed navigation clustering and association mining. Expert Systems with Applications, 27(3), 365–377. doi:10.1016/j.eswa.2004.05.005

Yamazaki, Y., Iwamida, H., & Watanabe, K. (2004). Technologies for voice portal platform. Fujitsu Scientific and Technical Journal, 40(1), 179–186.

Zhai, S., Kristensson, P.-O., & Smith, B. A. (2005). In search of effective text input interfaces for off the desktop computing. Interacting with Computers, 17(3), 229–250. doi:10.1016/j.intcom.2003.12.007
This work was previously published in Global Implications of Modern Enterprise Information Systems: Technologies and Applications, edited by Angappa Gunasekaran, pp. 137-155, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 3.12
Achieving System and Business Interoperability by Semantic Web Services

John Krogstie
Norwegian University of Science and Technology (NTNU), Norway, & SINTEF ICT, Norway

Csaba Veres
Norwegian University of Science and Technology (NTNU), Norway

Guttorm Sindre
Norwegian University of Science and Technology (NTNU), Norway

Øyvind Skytøen
Norwegian University of Science and Technology (NTNU), Norway
ABSTRACT

Much of the early focus in the area of Semantic Web has been on the development of representation languages for static conceptual information, while there has been less emphasis on how to make Semantic Web applications practically useful in the context of knowledge work. To achieve this, a better coupling is needed between ontology, service descriptions, and workflow modeling, including both traditional production workflow and interactive workflow techniques. This chapter reviews the basic technologies involved in this area to provide system and business interoperability, and outlines what can be achieved by merging them in the context of real-world workflow descriptions.

INTRODUCTION

Information systems interoperability has become a critical success factor for process and quality improvement both in private enterprises and the public sector (Linthicum, 2003), and recent technological advances to achieve this include web services and semantics encoded in ontologies. “The Semantic Web” (Berners-Lee, Hendler & Lassila, 2001) is seen as the next generation of web systems, providing better information retrieval, better services, and enhanced interoperability between different information systems. The Semantic Web initiative is currently overseen in the semantic web activity of the W3C, and includes a number of core technologies. Some core technologies that will be relevant to this overview are XML, RDF, RDF/S, OWL, and Web Services (SOAP, WSDL,
DOI: 10.4018/978-1-60566-146-9.ch010
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
UDDI). Also newer initiatives such as OWL-S and WSMO are relevant to our work, and will be described in more detail in this chapter. While these technologies are promising, it can still be argued that alone, they are not sufficient to achieve interoperability in the business domain, allowing for a smooth integration between different information systems within and between organizations. For this to be accomplished, it is not enough to describe ontological metadata about the information and services available – one also needs to know the work context in which the different types of information and services are requested. As observed in (Bubenko, 2007), this is often a challenge, as many ontologists focus on domain ontologies as such, more than their potential usage in applications, as well as having limited knowledge of advances in other areas of conceptual modeling during the last decades. Hence there is a need to integrate ontologies and service descriptions with models of workflows and business processes. Most of the work within these areas focuses on automating routine tasks. While computerization automates routine procedures, knowledge-based cooperation remains a challenge, where we see a role for interactive process models. To the extent that different enterprises use different modeling languages, the interoperability between various models also emerges as a challenge in its own right, in which case some unification effort might be needed (Opdahl & Sindre, 2007); one effort in this direction is the Unified Enterprise Modeling Language (UEML)1, not to be confused with the UML. The purpose of this chapter is as follows:

a) To provide an overview of the relevant technologies (ontology, service models, workflow models, including those being based on interactive models).

b) To show how these technologies fit together, both in theory (presented as “The interoperability pyramid”) and in practice.
bAsE tEcHNOLOGIEs AND ONtOLOGy We here briefly describe core technologies within the area, including XML, RDF, RDF Schema, and ontologies including an overview of OWL.
XML XML will receive the least coverage in this review. It is the most general and widespread of the technologies we consider, and is therefore likely to be familiar to the majority of readers. Basically, XML defines a set of syntax rules that can be used to create semantically rich markup languages for particular domains. Once a markup language is defined and the semantics of the tags known, the document content can be annotated. The XML language thus defined can include specification of formatting, semantics, document meta-data (author, title, etc.), and so on. XML allows for the creation of elements which are XML containers consisting of a start tag, content, and an end tag. Because of the flexibility of XML in defining domain specific, meaningful markups, it has been widely adapted as a standard for application independent data exchange. These properties combine to make XML the foundational technology for the semantic web, providing a common syntax for authoring web content. XML provides means for syntactic interoperability, as well as ways to ensure the validity of a document, and most importantly the necessary syntax to define the meaning of elements in a domain specific application. On
On the other hand, providing the syntax for defining meaning is only a necessary, but not sufficient, condition for the specification of semantics that allows interoperability. Building on the XML specification also becomes necessary because the hierarchical structure of XML documents makes them difficult to use for extensible, distributed data definitions. Much of the information about relationships in the data is implicit in the structure of the document, making it difficult to use and update this information in a flexible and application-independent way. This is where RDF comes into the picture.
RDF

The first level at which a concrete data model is defined on XML is the Resource Description Framework (RDF). Actually, RDF as a data model is independent of XML, but we consider it as a layer extending XML because of the widely practiced XML serialization of RDF in semantic web applications (RDF/XML2). The basic structure of RDF is a triple consisting of two nodes and a connecting edge. These basic elements are all kinds of RDF resources, and are described in various ways in (Manola & Miller, 2004), (Broekstra, Kampman, & van Harmelen, 2003), and (Powers, 2003). There are alternative serializations of RDF, including N33, N-Triples4, and Turtle5. Each of these professes some advantages, for example human readability, but RDF/XML is the normative syntax for writing RDF. This relatively simple basic model has several features that make it a powerful data model for integrating data in dispersed locations (Butler, 2002):

1. RDF is based on triples, in contrast to simple attribute-value pairs. The advantage of using triples is that this makes the subject of the attribute-value pair explicit.

2. RDF distinguishes between resources and properties that are globally qualified, i.e., are associated with a URI, and those that are locally qualified. The advantage of a globally qualified resource or property is that it can be distinguished from other resources or properties in different vocabularies that share the same fragment name, in a fashion that is analogous to XML namespaces.

3. As a result of the first two properties, RDF can be used to make statements about Web resources, by relating one URI to another.

4. It is easy to encode graphs using RDF as it is based on triples, whereas XML documents are trees, so encoding graphs is more complicated and can be done in several different ways.

5. RDF has an explicit interpretation or model theory; there is an explicit formal, application-independent interpretation of an RDF model (Hayes, 2004). XML documents also have interpretations, but they are often implicit in the processor or parser associated with that particular type of XML document.
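As a small, hedged illustration of these ideas in code, the snippet below builds and serializes one triple with the Jena toolkit (package names as in the Jena 2.x releases of that period); the URIs and the author property are invented for the example.

    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.Property;
    import com.hp.hpl.jena.rdf.model.Resource;

    public class TripleSketch {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            String ns = "http://example.org/terms#";
            // subject: a globally qualified resource
            Resource doc = model.createResource("http://example.org/doc/42");
            // predicate: a globally qualified property
            Property author = model.createProperty(ns, "author");
            // object: here a literal, but it could equally be another resource
            doc.addProperty(author, "Jane Smith");
            model.write(System.out, "N-TRIPLE");  // alternatives: "RDF/XML", "N3"
        }
    }

The single statement printed by this program is exactly one subject-predicate-object triple, which is the granularity at which RDF data from dispersed sources can be merged.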
But in spite of the apparent usefulness of RDF, its adoption has been slow compared with that of XML (Batzarov, 2004). There are many possible reasons for this slow adoption. Daconta, Obrst, and Smith (2003) take an optimistic position and attribute the long lead-in time to poor tutorials, minimal tool support, and poor demonstration applications, arguing that once the practical limitations have been overcome, adoption will grow rapidly. However, we must not ignore the presence of dissatisfaction with RDF in both practitioner and research communities. Some of the challenges for RDF in light of this dissatisfaction are as follows:

1. RDF / XML (or XHTML) integration needs improvement. The W3C RDF Working Group is working on solutions for successfully embedding RDF within XHTML (RDF/A6), and tools such as SMORE7 purport to make HTML markup easier. But so far there are no high-profile, compelling applications to showcase the advantages of RDF. For example microformats8, which can be seen as a very simple version of RDF/A but are “designed for humans first and machines second”, have enjoyed a rapid uptake; both Yahoo! and Google can run specialized searches on microformats.

2. The RDF data model can be complex and confusing because it mixes metaphors and introduces new concepts that can be tricky to model. For instance, the standard notion of RDF as composed of subject-predicate-object is linguistically derived, but its relationship to concepts in other representations is somewhat unclear, e.g., class-property-value (object-oriented), node-edge-node (graph theory), source-link-destination (web link), entity-relation-entity (database), and this can cause confusion. One of the particularly tricky constructs is reification, which introduces an unproven modeling construct that is foreign to most data modeling communities. Reification can cause confusion because it can be used to arbitrarily nest statements, possibly negating the stated truth value of statements (Daconta, Obrst, & Smith, 2003).

3. The RDF/XML serialization is confusing and difficult to work with, especially in the absence of proper tool support. The striped syntax (Brickley, 2001) can make it difficult to understand the proper interpretation of statements; for instance, it is often impossible to tell whether an XML element in the RDF serialization represents an edge or a node. The complexity of the syntax is partially responsible for the relatively weak support for the RSS1.09 specification. RSS1.0 is an RDF-based variant of the popular RSS format, and is probably the most high-profile use of RDF on the Internet. However, it is losing ground in terms of popularity to the non-RDF-based, and syntactically much simpler, RSS 2.010.
non-RDF based, and syntactically much simpler RSS 2.010. Clearly there is a great deal of work to be done in establishing RDF as a core technology that adds value to the widely adopted XML syntax alone. There are some fledgling ventures launched in 2007, backed by high profile investors, which attempt to bring the advantages of RDF to mainstream social networking applications11. Should they become successful, then RDF will become more prominent in the public eye. But RDF is also important as a foundation layer for Ontologies, making it relatively simple to express higher level ontological constructs. Implementing ontologies in XML and XML Schema without RDF is tricky for several reasons. In describing a procedure for translating an ontology into an XML Schema, (Klein et al., 2003) note several important problems. First, superclass/ subclass inheritance is problematic and has to be overcome with artificial workarounds in the XML specification, and defining multiple inheritance is not possible at all in XML/S. Second, the possibility of fully automating the translation process is questionable, limiting its use for large ontologies. In order to use RDF as a means of representing knowledge it is necessary to enrich the language in ways that fixes the interpretation of parts of the language. As described thus far, RDF does not impose any interpretation on the kinds of resources involved in a statement beyond the roles of subject, predicate and object. It has no way of imposing some sort of agreed meaning on the roles, or the relationships between them. The RDF schema is a way of imposing a simple ontology on the RDF framework by introducing a system of simple types.
RDF SCHEMA

We have seen that RDF provides a means to relate resources to one another in a graph-based formalism connecting subjects to objects via predicates. The RDF Schema (RDF/S) provides modeling primitives that can be used to capture basic semantics in a domain-neutral way. That is, RDF/S specifies metadata that is applicable to the entities and their properties in all domains. This metadata then serves as a standard model by which RDF tools can operate on specific domain models, since the RDF/S meta-model elements have a fixed semantics in all domain models. The RDF/S elements are shown in Table 1 and Table 2. RDF/S provides simple but powerful modeling primitives for structuring domain knowledge into classes and subclasses and properties and sub-properties; it can impose restrictions on the domain and range of properties, and it defines the semantics of containers. However, these simple meta-modeling elements can limit the expressiveness of RDF/S. Some of the main
limiting deficiencies are identified in (Antoniou & van Harmelen, 2004):

• Local scope of properties: in RDF/S it is possible to define a range on properties, but not so that it applies to some classes only. For instance, the property eats can have a range restriction of food that applies to all classes in the domain of the property, but it is not possible to restrict the range to plants for some classes and meat for others.

• Disjointness of classes cannot be defined in RDF/S.

• Boolean combinations of classes are not possible. For example, Person cannot be defined as the union of the classes Male and Female.

• Cardinality restrictions cannot be expressed.

• Special characteristics of properties, like transitivity, cannot be expressed.
Table 1. RDF/S classes

| Class name | Comment |
|---|---|
| rdfs:Resource | The class resource, everything. |
| rdfs:Literal | The class of literal values, e.g. textual strings and integers. |
| rdfs:Class | The class of classes. |
| rdfs:Datatype | The class of RDF datatypes. |
| rdfs:Container | The class of RDF containers. |
| rdfs:ContainerMembershipProperty | The class of container membership properties, rdf:_1, rdf:_2, ..., all of which are subproperties of 'member'. |
Table 2. RDF/S properties

| Property name | Comment | Domain | Range |
|---|---|---|---|
| rdfs:subClassOf | The subject is a subclass of a class. | rdfs:Class | rdfs:Class |
| rdfs:subPropertyOf | The subject is a subproperty of a property. | rdf:Property | rdf:Property |
| rdfs:domain | A domain of the subject property. | rdf:Property | rdfs:Class |
| rdfs:range | A range of the subject property. | rdf:Property | rdfs:Class |
| rdfs:label | A human-readable name for the subject. | rdfs:Resource | rdfs:Literal |
| rdfs:comment | A description of the subject resource. | rdfs:Resource | rdfs:Literal |
| rdfs:member | A member of the subject container. | rdfs:Resource | rdfs:Resource |
| rdfs:seeAlso | Further information about the subject resource. | rdfs:Resource | rdfs:Resource |
| rdfs:isDefinedBy | The definition of the subject resource. | rdfs:Resource | rdfs:Resource |
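As an illustration of how the primitives in Tables 1 and 2 combine in practice, the sketch below encodes the eats example from the list of limitations above. The zoo vocabulary is invented, and the fragment shows both what RDF/S can say (classes, subclassing, domain, range) and, implicitly, what it cannot: the range of eats holds for every Animal and cannot be narrowed per class.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xml:base="http://example.org/zoo">
  <rdfs:Class rdf:ID="Animal"/>
  <rdfs:Class rdf:ID="Carnivore">
    <rdfs:subClassOf rdf:resource="#Animal"/>
  </rdfs:Class>
  <rdfs:Class rdf:ID="Food"/>

  <!-- 'eats' applies to every Animal and always ranges over Food. RDF/S has
       no way to narrow the range to meat for Carnivores only. -->
  <rdf:Property rdf:ID="eats">
    <rdfs:domain rdf:resource="#Animal"/>
    <rdfs:range rdf:resource="#Food"/>
    <rdfs:comment>What an animal consumes.</rdfs:comment>
  </rdf:Property>
</rdf:RDF>
```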
Ontologies

A good starting point for understanding what ontology entails is to consider Figure 1, adapted from Daconta, Obrst, and Smith (2003), which places a number of knowledge models on a continuum. As you go from the lower left corner to the upper right, the richness of the expressible semantics increases. This is shown on the right side of the arrow with some typical expressions that have some sort of defined semantics for the particular model. The names of the knowledge models are given on the left of the arrow. It is important to note that all of the terms on the left-hand side have been called "ontology" by at least some authors, which is part of the source of confusion about the word. Models based on the various points along the ontology spectrum have different uses (McGuinness, 2003). In the simplest case, a group of users can agree to use a controlled vocabulary for their domain. This of course does not guarantee that they will use the terms in the same way all the time, but if all the users, including database designers, choose their terms from an accepted set, then the chances of mutual understanding are greatly enhanced.
Figure 1. The ontology spectrum
Perhaps the most publicly visible use for simple ontologies is the taxonomies used for site organization on the World Wide Web. These allow designers to structure information and users to browse and search. Taxonomies can also help with sense disambiguation, since the context of a term is given by the more general terms in the taxonomy. Structured ontologies provide more sophisticated usage scenarios. For instance, they can provide simple consistency and completeness checks: if all products must have a price, then web sites can automatically be checked for missing or conflicting information. Such ontologies can also provide completion, where partially specified information can be expanded automatically by reference to the terms in the ontology. This expanded information could also be used for refining search, for instance. Ontologies can also facilitate interoperability, by aligning different terms that might be used in different applications (McGuinness, 2003). Now we are in a position to see why the ontologies on the most formal end of the spectrum are often taken as the default interpretation in the context of the semantic web, providing the conceptual underpinning for "... making the
semantics of metadata machine interpretable" (Staab & Studer, 2004). But for the semantics of a domain model to be machine interpretable in any interesting way, it must be in a format that allows automated reasoning in a flexible manner. Obviously, taxonomies can specify little in this sense. Database schemas are more powerful, but they limit the interpretation to a single model, and the only automated reasoning that can be performed over the knowledge base is what is allowed by the relational model, i.e., retrieval of tuples actually represented in the database. Formal logic-based reasoning about ontologies can consider multiple possible models (Borgida & Brachman, 2003). Ontologies are at the same time more formally constrained and more semantically flexible than database schemas. Ontologies based on different logics can support different kinds of inference, but a minimal set of services should include reasoning about class membership, class equivalence, consistency, and classification (Antoniou & van Harmelen, 2004). The ontology representation language adopted by the Web Ontology Working Group of the W3C12 is the Web Ontology Language (OWL). OWL is a response to a number of requirements (Smith, Welty, & McGuinness, 2004), including the need for a language with formal semantics that enables automated reasoning, and the need to address the inherent limitations of RDF/S described above.
OWL

According to the original design goal, OWL was to be a straightforward extension of RDF/S, guaranteeing downward compatibility such that an OWL-aware processor could also understand RDF/S documents without modification. Unfortunately this did not succeed, because the generality of some RDF/S elements (e.g. the semantics of class as "the class of all classes") makes RDF/S expressions intractable in the general case. In order to maintain computational tractability, OWL processors include restrictions that prevent
the interpretation of some RDF/S expressions. The OWL specification defines three sublanguages: OWL Full, OWL DL, and OWL Lite. OWL Full is upward and downward compatible with RDF, but OWL DL and OWL Lite are not. The names of the three sublanguages of OWL describe their expressiveness, keeping in mind a fundamental tradeoff between expressiveness, efficiency of reasoning, and support for human understanding. OWL Full has constructs that make the language undecidable. Developers should therefore only use OWL Full if the other two sublanguages are inadequate for modeling the relevant domain, or if they wish to maintain full compatibility with RDF. Similarly, OWL DL should be used if OWL Lite is not sufficient. Details of the syntax and semantics can easily be obtained from the technical documentation web site of the W3C.
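To indicate what the extra expressiveness buys, the sketch below states in OWL several of the things listed earlier as inexpressible in RDF/S: disjointness, a Boolean class combination (Person as the union of Male and Female), a cardinality restriction, and a transitive property. The vocabulary is invented for illustration, and the fragment is OWL DL in spirit rather than a complete, tested ontology.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xml:base="http://example.org/people">
  <!-- Disjointness, not expressible in RDF/S: -->
  <owl:Class rdf:ID="Male">
    <owl:disjointWith rdf:resource="#Female"/>
  </owl:Class>
  <owl:Class rdf:ID="Female"/>

  <!-- A Boolean combination: Person is the union of Male and Female. -->
  <owl:Class rdf:ID="Person">
    <owl:unionOf rdf:parseType="Collection">
      <owl:Class rdf:about="#Male"/>
      <owl:Class rdf:about="#Female"/>
    </owl:unionOf>
    <!-- A cardinality restriction: every Person has exactly one hasMother value. -->
    <rdfs:subClassOf>
      <owl:Restriction>
        <owl:onProperty rdf:resource="#hasMother"/>
        <owl:cardinality rdf:datatype="http://www.w3.org/2001/XMLSchema#nonNegativeInteger">1</owl:cardinality>
      </owl:Restriction>
    </rdfs:subClassOf>
  </owl:Class>
  <owl:ObjectProperty rdf:ID="hasMother"/>

  <!-- A special property characteristic: ancestry is transitive. -->
  <owl:TransitiveProperty rdf:ID="hasAncestor"/>
</rdf:RDF>
```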
WEB SERVICES

There is a great deal of interest in web services (Alonso et al., 2004) and service-oriented architectures in general. A useful definition can be found in Daconta, Obrst, and Smith (2003): "Web services are software applications that can be discovered, described, and accessed based on XML and standard Web protocols over intranets, extranets, and the Internet." This definition exposes the main technical aspects of web services, to do with discovery and description, as well as the role of WWW (e.g. XML) technologies for data exchange and communication. The definition is also abstract enough to exclude low-level protocols like RPC from counting as web services. These core concepts, along with the associated technologies, are shown in Figure 2 below.

Figure 2. The basic layers of Web services

It is important to situate the role of Web services in the real world. Daconta, Obrst, and Smith (2003) argue that the most important factor for determining the future of a new technology is not "... how well it works or how 'cool' it is ..." but its business adoption. Along this line they see a bright future for Web services, which are being promoted by Microsoft, IBM, and Sun, as well as the open source community. But why such widespread support? One reason is the promise of interoperable systems. Once businesses adopt standardized web service descriptions, the possibility of exchanging data and sharing the cost of services increases. In addition, the open standards prevent monopolization of applications, preventing the dreaded "vendor lock-in" associated with proprietary solutions. Finally, widespread adoption of Web service protocols means that existing applications can be leveraged by turning them into Web services. As an example, it is possible for .NET clients and servers to talk to J2EE servers using SOAP. The point of all this is that Web services enable interoperability at the level of business processes without having to worry about interoperating between different applications, data formats, communication protocols, and so on. We will see later in the article that this influences the way workflows and knowledge-based work processes are modeled and instantiated in particular work environments.

As described above, web services must be discovered, described, and appropriately connected in an implementation-independent way. Berardi et al. (2005) outline three different approaches to web service discovery, on a trade-off between ease of provision and accuracy: 1) natural language keyword matching, 2) ontology-based keyword matching (increasing precision through a controlled vocabulary), and 3) semantic matchmaking, based on precise semantic descriptions of services and service needs. Currently, service descriptions in UDDI, for example, are primarily text descriptions with no semantic markup, requiring a lot of manual input and not facilitating the more advanced approaches to discovery.
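The description layer of Figure 2 is typically realized with WSDL. As a point of reference for the discussion that follows, here is a minimal, hypothetical WSDL 1.1 sketch (all names and endpoints are invented, and the SOAP binding is elided for brevity). Note that everything in it is purely syntactic, types in and types out, with no machine-interpretable statement of what getQuote means; that gap is exactly what the frameworks discussed below try to fill.

```xml
<definitions name="QuoteService"
             targetNamespace="http://example.org/quote"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://example.org/quote"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <!-- Messages define the data exchanged, by type only. -->
  <message name="GetQuoteRequest">
    <part name="symbol" type="xsd:string"/>
  </message>
  <message name="GetQuoteResponse">
    <part name="price" type="xsd:float"/>
  </message>
  <!-- The port type groups operations: the service's abstract interface. -->
  <portType name="QuotePortType">
    <operation name="getQuote">
      <input message="tns:GetQuoteRequest"/>
      <output message="tns:GetQuoteResponse"/>
    </operation>
  </portType>
  <!-- A binding (e.g. SOAP over HTTP) and a concrete service endpoint
       would follow here; they are omitted for brevity. -->
</definitions>
```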
As for service composition, Berardi et al. (2005) distinguish between synthesis, building the specification of the composite service from its subservices, and orchestration, the run-time management of the composite service (scheduling, invoking sub-services, etc.). Synthesis can be done either manually or automatically, the latter requiring that services have been specified formally. The orchestration problem for web services has a lot in common with similar issues in workflow management, which will be discussed in the next section. Dijkman and Dumas (2004) identify four different viewpoints from which the control-flow aspects of web services can be described, distinguishing between choreography, which is a collaboration between service providers and users to achieve a certain goal, and orchestration, which is what a service provider performs internally to realize a service it provides. (The other two viewpoints are the behavior interface and the provider interface.) There are two ongoing standardization efforts related to service composition (Barros, Dumas & Oaks, 2005): the Web Service Business Process Execution Language (WS-BPEL), formerly known as BPEL4WS, and the Web Service Choreography Description Language (WS-CDL). WS-BPEL (Arkin et al., 2005) is meant to specify both abstract and executable business processes, and the language contains one section of core concepts (needed for both kinds of specifications) as well as sections with extensions for executable processes and abstract processes (a.k.a. business
protocols), respectively. The main viewpoint taken in WS-BPEL is that of orchestration, requiring centralized control of the business process. WS-CDL (Cavantzas et al., 2005) takes the alternative viewpoint of choreography, meaning that this language is better suited for describing the interplay between several independent parties in a shared control domain. There are several proposed frameworks to facilitate automatic support for describing, finding and composing web services: METEOR-S13, the Web Service Modeling Ontology (WSMO)14, the Internet Reasoning Service (IRS)15, and OWL-S16. Before turning to these frameworks, the sketch below illustrates the orchestration viewpoint concretely.
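The following is a minimal, hypothetical WS-BPEL process skeleton (names, namespaces and partner link types are invented, and variable and correlation declarations are omitted). It is intended only to show the orchestration style: one central process receives a request, invokes a partner service, and replies, holding all control flow itself.

```xml
<process name="OrderProcess"
         targetNamespace="http://example.org/order"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
         xmlns:tns="http://example.org/order">
  <partnerLinks>
    <!-- The central process holds a link to every party it coordinates. -->
    <partnerLink name="customer"  partnerLinkType="tns:customerLT"  myRole="seller"/>
    <partnerLink name="warehouse" partnerLinkType="tns:warehouseLT" partnerRole="stock"/>
  </partnerLinks>
  <sequence>  <!-- control flow is defined, and executed, in one place -->
    <receive partnerLink="customer"  operation="placeOrder" createInstance="yes"/>
    <invoke  partnerLink="warehouse" operation="reserveStock"/>
    <reply   partnerLink="customer"  operation="placeOrder"/>
  </sequence>
</process>
```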
METEOR-S

The METEOR-S project is run at the LSDIS Lab at the University of Georgia as a successor of the METEOR project, whose focus was on workflow management in a more traditional, transaction-oriented perspective. METEOR-S takes a more semantic and dynamic perspective on workflow management. The METEOR-S architecture consists of three main components: the process designer, the configuration module, and the execution environment. The process designer module supports the design of abstract work processes represented in WS-BPEL. The Jena toolkit is used for building and processing ontologies. The process configuration module is responsible for dynamically finding and binding services for the defined processes. The METEOR-S Web Service Annotation Framework (MWSAF)17 tool is used for semi-automatically annotating web services with semantics using the WSDL-S18 language. The execution environment consists of a logical layer over a web process execution engine. The execution engine uses proxies for each virtual partner of the process. To support run-time and deployment-time binding, the configuration module can change the service bound to the proxies. At the time of writing (December 2007) the METEOR-S framework is in version 0.8, with a couple of finished tools. Currently, the MWSAF
tool seems to be the only one publicly available for download.
WSMO

The Web Service Modeling Ontology (WSMO) is a project undertaken by the WSMO Working Group under the SDK19 project cluster (EU). WSMO consists of three main components: a modeling framework of core elements for semantic web services, a formal description language (the Web Service Modeling Language, WSML), and an execution environment (WSMX). The WSMO core elements are:

1. Ontologies – provide the formally specified terminology of the information used by all other components
2. Goals – objectives that a client wants to achieve by using Web Services
3. Web Services – semantic descriptions of web services, including functional capability and usage interface
4. Mediators – connectors between components with mediation facilities for handling heterogeneities
Each of these elements is further described by non-functional properties, including the Dublin Core Metadata Set, versioning information, quality of service information, and other relevant annotations. Together these components are able to define the terminology of the domain and how it relates to applications, and to describe the service in terms of its pre-conditions, post-conditions, effects, and the mediators required during the discovery and execution of the service. Several tools related to WSMO are publicly available, the most important ones being the Web Service Execution Environment (WSMX), the Web Service Modeling Toolkit (WSMT), and the WSML Validator.
Internet Reasoning Service

The Internet Reasoning Service (IRS) is an ongoing project at the Knowledge Media Institute at the Open University. IRS has many similarities with WSMO, as it actually uses WSMO's ontology as a basis, but it provides several extensions. In particular, the concepts of goal and web service have been extended in IRS relative to the original WSMO definition. The latest implementation is IRS-III, supporting the following activities for building semantic web services (Cabral et al., 2006):

• Use of domain ontologies
• Description of client requests as goals
• Semantic description of deployed web services
• Resolution of conceptual mismatches
• Publication and invocation of the described web services

The support for these activities is achieved by the IRS-III server, which has the following components:

• The SWS library, where the semantic descriptions of web services are stored.
• Interpreters for choreography and orchestration, respectively.
• The Mediation Handler, supporting brokering in the process of selecting, composing and invoking web services.
• The Invoker, which communicates with the service publishing platform, sends input from the client to the invoked services, and returns the results back to the client.

The IRS server is written in LISP and is available as an executable file. The publishing platforms for web services are available as Java web applications. Also available is the WebOnto20 tool for visualizing and editing IRS-III ontologies defined in the language OCML (Operational Conceptual Modeling Language)21.

OWL-S

OWL-S is a W3C initiative to provide an ontology and language to describe web services. It is less revolutionary than WSMO or IRS, as is evidenced by its closer ties to current standards like WSDL and UDDI. Its primary role is to assist discovery, which it fulfils by specifying three key components of a service as parts of its Upper Ontology of Services:

• What does the service provide for prospective clients? The answer to this question is given in the Service Profile, which is used to advertise the service.
• How is it used? The answer to this question is given in the "process model." This perspective is captured by the ServiceModel class.
• How does one interact with it? The answer to this question is given in the "grounding." A grounding provides the needed details about transport protocols.

Thus each service presents a Service Profile (what it does), is described by a Service Model (how it works), and supports a Service Grounding (how to access it). Available implementations of OWL-S include OWL files for the Upper Ontology of Services and other relevant ontologies used by this Upper Ontology. Moreover, a set of relevant tools has been released, mostly by third parties, for instance:

• The OWL-S Protégé-based editor22
• Another OWL-S Editor23 developed at the University of Malta
• The ASSAM Web Service Annotator (Hess et al., 2004)
• The Semantic Web Service Composer24
• The OWL-S matcher25, to assess the degree of correspondence between different service descriptions
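Putting the three questions together, the sketch below indicates how an OWL-S description might hang together for a hypothetical quote service (all names are invented; the process model and grounding bodies are only indicated). Each of the three answers is a separate resource linked from the service instance.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:service="http://www.daml.org/services/owl-s/1.1/Service.owl#"
         xmlns:profile="http://www.daml.org/services/owl-s/1.1/Profile.owl#"
         xml:base="http://example.org/quoteService">
  <service:Service rdf:ID="QuoteService">
    <service:presents    rdf:resource="#QuoteProfile"/>    <!-- what it does -->
    <service:describedBy rdf:resource="#QuoteModel"/>      <!-- how it works -->
    <service:supports    rdf:resource="#QuoteGrounding"/>  <!-- how to access it -->
  </service:Service>
  <profile:Profile rdf:ID="QuoteProfile">
    <profile:serviceName>Stock quote lookup</profile:serviceName>
    <profile:textDescription>Returns the current price for a ticker symbol.</profile:textDescription>
  </profile:Profile>
  <!-- #QuoteModel (a process model) and #QuoteGrounding (a mapping onto a
       WSDL description) would be defined analogously. -->
</rdf:RDF>
```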
Comparison of the Frameworks

While OWL-S is a less comprehensive approach, there are certain similarities between it and the WSMO-based approaches:

• OWL-S Service Profile ≈ WSMO capability + goal + non-functional properties. WSMO separates the provider point of view (capabilities) from the requester point of view (goals), while OWL-S Profiles combine existing capabilities (advertisements) and desired capabilities (requests).
• OWL-S process model ≈ WSMO Service Interfaces. The process model in the OWL-S ServiceModel roughly corresponds to the interfaces in the WSMO Web Services descriptions.
• OWL-S Grounding ≈ WSMO Grounding. Both provide a mapping to WSDL.

Nevertheless, clear differences exist in the overall architecture, as well as in the reliance of WSMO on explicitly defined mediators. A key objective of WSMO is to define a taxonomy of mediators to translate between messages produced by one Web service and those expected by another. In the OWL-S vision this is a step which can detract from the primary purpose of discovery. To be sure, the translation problems still need to be solved, but OWL-S assumes this will be possible through some form of composition (Ankolekar et al., 2004). This has some implications for the use of each system in a specific context, such as the system described in subsequent sections. The comparison of the four discussed frameworks is summed up in Table 3, concerning the available support for various activities related to the development and running of applications based on semantic web services. As can be seen from this comparison, OWL-S is the framework that differs most from the others, in the sense that it specifies less of the surrounding architecture for service and application development. While the other three include some kind of supporting tools for all seven tasks indicated in the left column of the table, OWL-S provides no specific support for publishing, composition, invocation and deployment.
Table 3. Comparison of the four frameworks

| SWS Activity | METEOR-S | WSMO | IRS-III | OWL-S |
|---|---|---|---|---|
| Publishing | Process Designer | WSMO Editor / Service Repository | Publishing Client / Handler | Not detailed |
| Discovery | Config. Module | Matchmaker | Mediation Handler | Matcher |
| Composition | Config. Module | Matchmaker | Mediation Handler | Not detailed |
| Selection | Config. Module | Selector | Mediation Handler | Matcher |
| Invocation | Execution Engine | Communication Manager | Invocation Handler / Publishing Platf. | Not detailed |
| Deployment | MWSAF | Matchmaker | Publishing Platf. | Not detailed |
| Ontology management | MWSAF | WSMO Editor | Mediation Handler | OWL-S Editors |
WORKFLOW AND ENTERPRISE PROCESS MODELING

The unprecedented flexibility of web services provides a particular challenge for how to integrate their use in enterprise work practices. On the one hand, demand-based service provision promises
to be a blessing for facilitating problem solving; on the other hand, the instance-based variability provided through the relatively free range of solutions offered in service composition could pose a serious challenge to established workflow modeling paradigms. Process modeling covers a family of techniques used to document and explicate a set of business and work processes in a way that allows their analysis at various levels, and for various purposes. Our specific interest here is to use workflow modeling to analyze the work contexts that are likely to be involved in the day-to-day activities of an enterprise, with the aim of improving the timely delivery of appropriate information-related resources and services. The purpose is to integrate workflow modeling with the potential of web services, to capture the likely usage scenarios under which the services will need to operate, and to model this integrated use. The aim is that the model of work practices will allow better specification of actual information needs, which will in turn allow richer requirements for the service descriptions expected from a web service, which will facilitate service composition and interoperability. The research problems therefore complement one another: workflow modeling helps web service design, but the availability of these services in turn improves workflow modeling techniques. The challenge for us is to construct modeling approaches that maintain sufficient expressive power for the required points of view as well as allowing flexibility for changing situations. Jørgensen (2004) and Krogstie and Jørgensen (2004) argue that static workflow models cannot handle the changing demands of real-world situations, and that adaptive models, while providing greater flexibility, still cannot adequately handle the instance-based, user-driven modifications needed in many situations. They argue for interactive models that can be dynamically configured by users.
Workflow and Process Modeling Languages

Workflow modeling has been used to learn about, guide and support practice in a number of different areas, including software process improvement (Bandinelli et al., 1995; Derniame, Kaba & Wastell, 1998), enterprise modeling (Fox & Grüninger, 1998), process-centric software engineering (Ambriola, Conradi & Fuggetta, 1997), and workflow systems (Fischer, 2001). The process modeling languages employed in these areas have been usefully categorized into one of the following types: transformational, conversational, role-oriented, constraint-based, and systemic (Carlsen, 1997). A summary of each type is given in Jørgensen (2004), where they are considered for their suitability as interactive modeling paradigms. Transformational languages represent the majority of process modeling languages in use, adopting an input-process-output approach. Some well-known languages adopting this approach are Data Flow Diagrams (DFD), IDEF-0, Activity diagrams, BPMN, Event-driven Process Chains (EPC), and Petri nets. While there are clear differences between the formalisms, it is possible to generalize in terms of their basic expressive commitments and therefore their suitability for modeling dynamic, flexible workflows (Conradi & Jaccheri, 1998; Curtis, Kellner & Over, 1992; Green & Rosemann, 2000; Lei & Singh, 1997). The standards defined by the Workflow Management Coalition (Fischer, 2001), the Internet Engineering Task Force (IETF) (Bolcer & Kaiser, 1999), and the Object Management Group (OMG, 2000) are all predicated on a common perspective. We consider a few languages from this perspective. The WfMC standards for process definition interchange between systems (WfMC, 1999) include a large portion of the primitives involved in transformational languages. Processes are modeled with hierarchical decomposition, and control flow structures for sequences, iteration, and AND and XOR branching. Activities can be associated with
organizational roles and actors, tools and applications. The core terminology of the WfMC is found in Fischer (2001). Importantly, there is a distinction between the process definition (the idealized process) and the instance (actual work). From past experience with developing flexible groupware and workflow systems (Carlsen, 1998; Jørgensen, 2001, 2003; Jørgensen & Carlsen, 1999; Natvig & Ohren, 1999), we have defined an interactive models approach to flexible information systems (Jørgensen, 2004). Models are normally defined as explicit representations of some portions of reality as perceived by some actor (Wegner & Goldin, 1999). A model is active if it directly influences the reality it reflects. Model activation involves actors interpreting the model and adjusting their behavior accordingly. This process can be:

• Automated, where a software component interprets the model,
• Manual, where the model guides the actions of human actors, or
• Interactive, where prescribed aspects of the model are automatically interpreted and ambiguous parts are left to the users to resolve.
We define a model to be interactive if it is interactively activated. By updating such a model, users can adapt the system to fit their local plans, preferences and terminology. The Business Process Modeling Language (BPML) (Arkin, 2002) defines a web service interface description language, which presents obvious promise concerning the present requirements. BPML emphasizes low-level execution and contains several control flow primitives for loops (foreach, while, until), branching (manual choice or rule-based switch, join), decomposition (all, sequential, choice), instantiation (call, spawn), properties (assign), tools (action), exceptions (fault), and transactions (compensate). The ability to define manual as well as rule-based branching
is promising for use in flexible systems. Unfortunately the promise is only partially realized, since different primitives are used for the two cases, implying that the automation boundary must be defined during process design. Additionally, BPML has weak support for local change and unforeseen exceptions. A visual notation for BPML, BPMN, has been developed and is getting increasing support both among researchers and in industrial practice (BPMI.org and OMG, 2006). The further standardization of BPMN has been taken over by OMG, which has also standardized the Business Process Definition Metamodel (BPDM), which applies MDA principles to provide a consistent end-to-end approach for business process modeling. The development of BPMN was based on the revision of other notations, including UML, IDEF, ebXML, RosettaNet, LOVeM and EPCs, and stemmed from the demand for a graphical language that complements the BPEL standard for executable business processes. Although this gives BPMN a technical focus, it has been the intention of the BPMN designers to develop a modeling language that can be applied to typical business modeling activities as well. The complete BPMN specification defines thirty-eight distinct language constructs plus attributes, grouped into four basic categories of elements, viz., Flow Objects, Connecting Objects, Swimlanes and Artefacts. Flow Objects, such as events, activities and gateways, are the most basic elements used to create Business Process Diagrams (BPDs). Connecting Objects are used to interconnect Flow Objects through different types of arrows. Swimlanes are used to group activities into separate categories for different functional capabilities or responsibilities (e.g., different roles or organizational departments). Finally, Artefacts may be added to a diagram where deemed appropriate in order to display further related information, such as processed data or other comments. BPDM, on its side, acknowledges that business process definitions are frequently used for
purposes that do not require automation (for example, simulation and optimization of manual processes). In cases where a business process is to be (partially) automated, BPDM enables sufficient detail to be added to a process definition to completely specify the process to the level of detail required to generate executable run-time artifacts (e.g. by providing a mapping from BPMN to BPEL). There is some recognition of the need to separate design components from run-time components for increased flexibility. This is realized in the WfMC's XML Process Definition Language (WfMC, 2002). But even here, the separation is focused mainly on facilitating the reuse of design components across different workflow engines and design tools; there is little support for user-driven interaction at run-time. It appears that current approaches are not designed with the flexibility required to accommodate the adaptive workflows that are enabled by Web Services technologies. One approach that we believe is worth pursuing is the use of interactive models, which we will look at in more detail below.
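For a feel of what such a design-time process definition looks like, here is a much-abridged XPDL-style fragment (element names follow the WfMC schema loosely, and all identifiers are hypothetical). The control flow is fully fixed when the package is authored; a workflow engine then instantiates it unchanged, which is exactly the rigidity that interactive models are meant to relax.

```xml
<Package Id="TravelBooking" xmlns="http://www.wfmc.org/2002/XPDL1.0">
  <WorkflowProcesses>
    <WorkflowProcess Id="ApproveTrip">  <!-- the process definition (design time) -->
      <Activities>
        <Activity Id="fillForm" Name="Fill in travel form"/>
        <Activity Id="approve"  Name="Manager approval"/>
      </Activities>
      <Transitions>
        <!-- Every running instance follows this fixed flow; there is no
             hook for user-driven change at run time. -->
        <Transition Id="t1" From="fillForm" To="approve"/>
      </Transitions>
    </WorkflowProcess>
  </WorkflowProcesses>
</Package>
```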
INTEGRATING ENTERPRISE AND IS DEVELOPMENT AND INTEROPERABILITY

Different approaches to model-driven development are appropriate for supporting different types of processes, from very static to very dynamic, even emergent, processes. The different process types determine the extent to which the underlying technology can be based on hardcoded, predefined, evolving or implicit process models. This gives a number of development approaches, as illustrated in Figure 3: at one extreme, systems are manually coded on top of a traditional runtime environment; at the other, enterprise models are used directly to generate process-support solutions. In between these, we have the approaches typically described in OMG's Model-Driven Architecture (MDA), namely the development of Platform Independent Models (PIMs) for code generation (e.g. on top of a UML Virtual Machine, denoted PIM EE in the figure), or of Platform Specific Models (PSMs) for more traditional code generation or manual implementation. In Figure 4, we outline the different types of interoperability possibilities between these types of development. Whereas traditional systems use special APIs and approaches such as EDI for the interchange of data, on the next level (PSM) we can identify the standard Web Services Interfaces (WSI).
Figure 3. Overview of different execution environments for different process models
Figure 4. Interoperability between different platforms
Above this level, there is a lot of work being performed on specific business process execution platforms, with the possibility to exchange directly using a BPI (Business Process Interface). Finally, projects such as EXTERNAL26, MAPPER27 and ATHENA28 have provided solutions for how to interoperate on the enterprise model level, potentially supporting interoperability across models developed using different modeling languages and different tools (Krogstie, 2007). The EXTERNAL project developed an infrastructure to support networked organizations, defined in three basic layers: the information and communication technology (ICT) layer (1); the knowledge representation layer (2); and the work performance and management layer (3). The "business end" of the infrastructure is layer 3, with support for modeling and implementing customer solutions, and for generating work environments as personalized and context-sensitive user interfaces available through portals. The task performers may access desktop tools, organizational information systems, web services, or automated processes through this user environment.
User environments are generated dynamically based on the definition of tasks using EEML (Extended Enterprise Modeling Language; Krogstie, 2008). Forms and components for interacting with different model objects are selected and composed based on generic user interface policies and on the personal and role-oriented preferences of the users. The dynamically generated work management interface includes services for work performance, but also for process modeling and meta-modeling. The model-generated workplace (MGWP) is the main component in this interface. In addition to the services for performing and managing the task, it contains links to all knowledge in the process models that is relevant for the task. Since the MGWP is dynamically generated, subject to personal preferences, the skill levels of task performers can be taken into account. Similarly, customized MGWPs for project management can support the project management team. The contents may include an overview of the project, adopted management principles, applicable methodologies, the project work breakdown structure, results, plans and tasks, technologies and resources, and status reporting and calculations (see Jørgensen, 2004; Krogstie & Jørgensen, 2004).
The full power of the MGWP is enabled by the first two layers, which enable interoperability between applications, services, and data in an organizational context. Layer 2 defines how models and meta-models are represented, used and managed and layer 1 defines the execution platform, software architectures, tools, software components, connectivity and communication. The key to the integrated functionality of the infrastructure is the consistency and interoperability of models and service descriptions at all relevant levels. Standards and ontologies can be used across all levels, and also between levels to make the interoperation happen smoothly. In addition, the interactive nature of the models, meaning that the users are free to refine them during execution, increases their potential as sources of experience and knowledge. As such they document details on how the work was actually done, not only how it was once planned. Clearly the complexities of such a rich framework must be managed, and this is precisely the role for the fundamental technologies reviewed in this paper. From a unified data interchange format (XML) to a common data model (RDF), and shared conceptualizations (using OWL), it is possible to define services, and the relationships between services and the tasks they are supposed to support, in a transparent and reproducible way. Moreover, the availability of web services can be driven by requirements as documented in real world workflow models. Vendors could independently implement solutions with a guarantee that they will integrate with some existing application framework.
CONCLUSION AND FURTHER WORK

This paper has provided a survey of relevant technologies for achieving semantic interoperability in the context of enterprise information systems, namely ontologies, service descriptions, and tool support frameworks for developing and executing service-oriented applications, as well as workflow models including both automated and interactive tasks. The overview of interoperability between different platforms, together with the example explanations of it, illustrates how the combination of these technologies can provide more advanced interoperability than current systems offer. We suggest that "interoperability" in the abstract may be an untenable goal, at least in the immediate future. But interoperability in the context of dynamic and interactive workflows, as the next best thing, is very much within our reach. Important future work on our approach is, as indicated above, related to integrating the results from the current work on semantic web services, in addition to operationalizing the semantic annotation approach.

ACKNOWLEDGMENT

Csaba Veres was funded by the Norwegian Research Council through the WISEMOD project while much of the work behind this paper was carried out. John Krogstie has in addition been funded by the projects ATHENA (http://www.athena-ip.org) and the Norwegian Research Council project MONESA. The ideas of this paper were pursued in the context of the EU NoE project INTEROP and its continuation, INTEROP-VLab, and we thank other partners of INTEROP for valuable inspiration. The paper does not represent the view of the funding organizations or project consortia, and the authors are solely responsible for the paper's content.

REFERENCES

Alonso, G., Casati, F., Kuno, H., & Machiraju, V. (2004). Web Services: Concepts, Architecture and Applications. Berlin: Springer.
Ambriola, V., Conradi, R., & Fuggetta, A. (1997). Assessing Process-Centered Software Engineering Environments. ACM Transactions on Software Engineering and Methodology, 6(3), 283–328. doi:10.1145/258077.258080

Ankolekar, A., Martin, D., McGuinness, D., McIlraith, S., Paolucci, M., & Parsia, B. (2004). OWL-S' Relationship to Selected Other Technologies. Technical report, W3C Member Submission, 22 November 2004. Retrieved 1 Feb, 2006, from http://www.w3.org/Submission/OWL-S-related/

Antoniou, G., & van Harmelen, F. (2004). Web Ontology Language: OWL. In S. Staab & R. Studer (Eds.), Handbook on Ontologies (pp. 67-92). Berlin: Springer.

Arkin, A. (2002). Business Process Modelling Language. Retrieved 23 Aug, 2003, from http://www.bpmi.org/bpmi-downloads/BPML-SPEC-1.0.zip

Arkin, A., Askary, S., Bloch, B., Curbera, F., Goland, Y., Kartha, N., et al. (Eds.). (2005). Web Services Business Process Execution Language Version 2.0. Technical report, OASIS Open, Inc., Committee Draft, 21 Dec, 2005. Retrieved 15 Feb, 2006, from http://www.oasis-open.org/committees/download.php/16024/wsbpel-specification-draft-Dec-22-2005.htm

Bandinelli, S., Fuggetta, A., Lavazza, L., Loi, M., & Picco, G. (1995). Modelling and Improving an Industrial Software Process. IEEE Transactions on Software Engineering, 21(5), 440–454. doi:10.1109/32.387473

Barros, A., Dumas, M., & Oaks, P. (2005). A Critical Overview of the Web Services Choreography Description Language. BPTrends (www.bptrends.com), March 2005, pp. 1-24.

Batzarov, Z. (2004). Orbis Latinus: Linguistic Terms. Retrieved 3 Apr, 2005, from http://www.orbilat.com/General_References/Linguistic_Terms.html
Berardi, D., Cabral, L., Cimpian, E., Domingue, J., Mecella, M., Stollberg, M., & Sycara, K. (2005). ESWC Semantic Web Services Tutorial. Retrieved 15 Feb, 2006, from http://stadium.open.ac.uk/dip/

Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web. Scientific American, 284(5), 34–43.

Bolcer, G. A., & Kaiser, G. (1999). SWAP: Leveraging the Web To Manage Workflow. IEEE Internet Computing, 3(1), 85–88. doi:10.1109/4236.747328

Borgida, A., & Brachman, R. (2003). Conceptual Modeling with Description Logics. In F. Baader, D. Calvanese, D. McGuinness, D. Nardi, & P. Patel-Schneider (Eds.), The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press.

BPMI.org and OMG. (2006). Business Process Modeling Notation Specification. Final Adopted Specification. Object Management Group, http://www.bpmn.org (February 20, 2006).

Brickley, D. (2001). RDF: Understanding the Striped RDF/XML Syntax. Retrieved 25 Sep, 2002, from http://www.w3.org/2001/10/stripes/

Broekstra, J., Kampman, A., & van Harmelen, F. (2003). Sesame: An Architecture for Storing and Querying RDF Data and Schema Information. In D. Fensel, J. A. Hendler, H. Lieberman & W. Wahlster (Eds.), Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential [outcome of a Dagstuhl seminar] (pp. 197-222). Cambridge, MA: MIT Press.

Bubenko, J., Jr. (2007). From Information Algebra to Enterprise Modelling and Ontologies – A Historical Perspective on Modelling for Information Systems. In J. Krogstie, A. L. Opdahl & S. Brinkkemper (Eds.), Conceptual Modelling in Information Systems Engineering (pp. 1-18). Berlin: Springer.
Butler, H. (2002). Barriers to real world adoption of Semantic Web technologies. Hewlett-Packard.

Cabral, L., Domingue, J., Galizia, S., Gugliotta, A., Tanasescu, V., Pedrinaci, C., & Norton, B. (2006). IRS-III: A broker for semantic web services based applications. In I. Cruz et al. (Eds.), Proc. ISWC'06, LNCS 4273, 201-214.

Carlsen, S. (1997). Conceptual Modelling and Composition of Flexible Workflow Models. PhD thesis, Department of Computer and Information Science, Norwegian University of Science and Technology, Trondheim, Norway.

Carlsen, S. (1998). Action Port Model: A Mixed Paradigm Conceptual Workflow Modeling Language. Proceedings of the 3rd IFCIS International Conference on Cooperative Information Systems (CoopIS'98), pp. 300-308. Los Alamitos, CA: IEEE CS Press.

Cavantzas, N., Burdett, D., Ritzinger, G., Fletcher, T., Lafon, Y., & Barreto, C. (Eds.). (2005). Web Services Choreography Description Language Version 1.0. Technical report, W3C Candidate Recommendation, 9 Nov, 2005. Retrieved 10 Feb, 2006, from http://www.w3.org/TR/2005/CR-ws-cdl-10-20051109/

OWL-S Coalition. (2004). OWL-S 1.1 Release. Retrieved 9 Aug, 2005, from http://www.daml.org/services/owl-s/

Conradi, R., & Jaccheri, L. (1998). Process Modelling Languages. In J.-C. Derniame, B. A. Kaba & D. G. Wastell (Eds.), Software Process: Principles, Methodology, and Techniques (pp. 27-52). Berlin: Springer (LNCS 1500).

Curtis, B., Kellner, M., & Over, J. (1992). Process Modeling. Communications of the ACM, 35(9), 75–90. doi:10.1145/130994.130998
Daconta, M., Obrst, L., & Smith, K. (2003). The Semantic Web: A guide to the future of XML, Web Services and Knowledge Management. London: Wiley.

Derniame, J.-C., Kaba, B. A., & Wastell, D. G. (Eds.). (1998). Software Process: Principles, Methodology, Technology. Berlin: Springer (LNCS 1500).

Dijkman, R., & Dumas, M. (2004). Service-Oriented Design: A Multi-Viewpoint Approach. International Journal of Cooperative Information Systems, 13(4), 337–368. doi:10.1142/S0218843004001012

Fischer, L. (Ed.). (2001). The Workflow Handbook 2001. Lighthouse Point, FL: Workflow Management Coalition (WfMC).

Fox, M., & Grüninger, M. (1998). Enterprise Modeling. AI Magazine, 19(3), 109–121.

Green, P., & Rosemann, M. (2000). Integrated Process Modelling: An Ontological Evaluation. Information Systems, 25(2), 73–87. doi:10.1016/S0306-4379(00)00010-7

Haake, J. M., & Wang, W. (1997). Flexible support for business processes: extending cooperative hypermedia with process support. In Proceedings of GROUP'97, International Conference on Supporting Group Work: The Integration Challenge.

Hayes, P. (2004). RDF Semantics. Technical report, W3C, 10 Feb 2004. Retrieved Mar 3, 2005, from http://www.w3.org/TR/rdf-mt/

Hess, A., Johnston, E., & Kushmerick, N. (2004). ASSAM: A Tool for Semi-automatically Annotating Semantic Web Services. In S. A. McIlraith et al. (Eds.), Proc. ISWC'04, Springer LNCS 3298, 320-334.
Jørgensen, H. D. (2001). Interaction as a Framework for Flexible Workflow Modelling. In C. Ellis & I. Zigurs (Eds.), Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work 2001, September 30 - October 3, 2001, Boulder, Colorado, USA, pp. 32-41.

Jørgensen, H. D. (2003). Model-Driven Work Management Services. In R. Jardim-Goncalves, H. Cha, & A. Steiger-Garcao (Eds.), Proceedings of the 10th International Conference on Concurrent Engineering (CE 2003), July 2003, Madeira, Portugal. A.A. Balkema Publishers.

Jørgensen, H. D. (2004). Interactive Process Models. PhD thesis, Department of Computer and Information Science, Norwegian University of Science and Technology, Trondheim, Norway.

Jørgensen, H. D., & Carlsen, S. (1999). Emergent Workflow: Integrated Planning and Performance of Process Instances. In J. Becker, M. zur Mühlen, & M. Rosemann (Eds.), Proceedings of the 1999 Workflow Management Conference: Workflow-based Applications, 9 Nov, Univ. Münster, Germany, pp. 98-116.

Klein, M., Broekstra, J., Fensel, D., van Harmelen, F., & Horrocks, I. (2003). Ontologies and Schema Languages on the Web. In D. Fensel, J. A. Hendler, H. Lieberman & W. Wahlster (Eds.), Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential [outcome of a Dagstuhl seminar] (pp. 95-139). Cambridge, MA: MIT Press.

Krogstie, J. (2004). Integrating Enterprise and IS Development Using a Model-Driven Approach. In O. Vasilecas, A. Caplinskas, G. Wojtkowski, W. Wojtkowski, & J. Zupancic (Eds.), Information Systems Development: Advances in Theory, Practice, and Education (Proc. ISD'04). Boston, MA: Kluwer.
Krogstie, J. (2007). Modelling of the People, by the People, for the People. In J. Krogstie, A. L. Opdahl & S. Brinkkemper (Eds.), Conceptual Modelling in Information Systems Engineering (pp. 305-318). Berlin: Springer.

Krogstie, J. (2008). Integrated Goal, Data and Process Modeling: From TEMPORA to Model-Generated Work-Places. In Johannesson & Søderstrøm (Eds.), Information Systems Engineering. IGI Publishing.

Krogstie, J., & Jørgensen, H. (2004). Interactive Models for Supporting Networked Organizations. In A. Persson & J. Stirna (Eds.), Advanced Information Systems Engineering, 16th International Conference (CAiSE'04). Berlin: Springer (LNCS 3084).

Kunz, J., Christiansen, T., Cohen, G., Jin, Y., & Levitt, R. (1998). The virtual design team: A computational simulation model of project organizations. Communications of the ACM, 41(11), 84–92. doi:10.1145/287831.287844

Lei, Y., & Singh, M. (1997). A Comparison of Workflow Metamodels. Paper presented at the ER'97 Workshop on Behavioral Models and Design Transformations: Issues and Opportunities in Conceptual Modeling, Los Angeles, CA.

Lillehagen, F. (1999). Visual extended enterprise engineering embedding knowledge management, systems engineering and work execution. IFIP International Enterprise Modeling Conference (IEMC'99), Verdal, Norway.

Linthicum, D. (2003). Next Generation Application Integration: From Simple Information to Web Services. Boston: Addison-Wesley.

Loos, P., & Allweyer, T. (1998). Process Orientation and Object Orientation: An Approach for Integrating UML (Technical Report). Saarbrücken, Germany: Institut für Wirtschaftsinformatik, University of Saarland.
Manola, F., & Miller, E. (2004, 10 Feb). RDF Primer. Retrieved 15 Aug, 2005, from http://www.w3.org/TR/rdf-primer/

McGuinness, D. L. (2003). Ontologies Come of Age. In D. Fensel, J. A. Hendler, H. Lieberman & W. Wahlster (Eds.), Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential [outcome of a Dagstuhl seminar] (pp. 171-195). Cambridge, MA: MIT Press.

Miles, A. (2006, February). RDF Molecules: Evaluating Semantic Web Technology in a Scientific Application. Available at http://www.w3c.rl.ac.uk/SWAD/papers/RDFMolecules_final.doc (Feb 20, 2006).

Natvig, M. K., & Ohren, O. (1999). Modeling shared information spaces (SIS). In GROUP '99: Proceedings of the International ACM SIGGROUP Conference on Supporting Group Work, Phoenix, AZ, Nov 14-17, pp. 99-108. New York: ACM Press.

OMG. (2000). Workflow Management Facility Specification, v.1.2. Needham, MA: Object Management Group.

Opdahl, A. L., & Sindre, G. (2007). Interoperable Management of Conceptual Models. In J. Krogstie, A. L. Opdahl & S. Brinkkemper (Eds.), Conceptual Modelling in Information Systems Engineering (pp. 75-90). Berlin: Springer.

Powers, S. (2003). Practical RDF. Sebastopol, CA: O'Reilly.

Scheer, A.-W., & Nuttgens, M. (2000). ARIS Architecture and Reference Models for Business Process Management. In W. M. P. van der Aalst, J. Desel & A. Oberweis (Eds.), Business Process Management (pp. 376-390). Berlin, Germany: Springer (LNCS 1806).
Smith, M., Welty, C., & McGuinness, D. L. (Eds.). (2004, 10 Feb). OWL Web Ontology Language Guide. Retrieved 25 Feb, 2005, from http://www.w3.org/TR/owl-guide/

Staab, S., & Studer, R. (Eds.). (2004). Handbook on Ontologies. Berlin, Germany: Springer.

van der Aalst, W. M. P. (1999). Formalization and Verification of Event-driven Process Chains. Information and Software Technology, 41(10), 639–650. doi:10.1016/S0950-5849(99)00016-6

Wegner, P., & Goldin, D. (1999). Interaction as a Framework for Modeling. In P. P. Chen, J. Akoka, H. Kangassalo, & B. Thalheim (Eds.), Conceptual Modeling (pp. 100-114). Berlin: Springer (LNCS 1565).

WfMC. (1999). Workflow Management Coalition Interface 1: Process Definition Interchange Process Model (Technical Report No. WfMC TC-1016-P). Lighthouse Point, FL: Workflow Management Coalition.

WfMC. (2002). Workflow Process Definition Interface - XML Process Definition Language (Technical Report No. WFMC-TC-1025). Lighthouse Point, FL: Workflow Management Coalition.

WSMO Working Group. (2005). The Web Service Modeling Language WSML. Retrieved Aug 20, 2005, from http://www.wsmo.org/wsml/wsmlsyntax
ENDNOTES

1. http://is.uib.no/wiki/UEML/UEML
2. http://www.w3.org/TR/rdf-syntax-grammar/
3. http://www.w3.org/DesignIssues/Notation3
4. http://www.w3.org/TR/2004/REC-rdf-testcases-20040210/#ntriples
5. http://www.dajobe.org/2004/01/turtle/
6. http://www.w3.org/2001/sw/BestPractices/HTML/2006-01-24-rdfa-primer
7. http://www.mindswap.org/2005/SMORE/
8. http://microformats.org/
9. http://web.resource.org/rss/1.0/spec
10. http://www.rssboard.org/rss-specification
11. http://www.radarnetworks.com/
12. http://www.w3.org/2001/sw/WebOnt/
13. http://lsdis.cs.uga.edu/projects/meteor-s/
14. http://www.wsmo.org/
15. http://kmi.open.ac.uk/projects/irs/
16. http://www.daml.org/services/owl-s/
17. http://lsdis.cs.uga.edu/projects/meteor-s/downloads/mwsaf/
18. http://www.w3.org/Submission/WSDL-S/
19. http://sdk.semanticweb.org/index.html
20. http://kmi.open.ac.uk/projects/webonto/
21. http://kmi.open.ac.uk/projects/ocml/
22. http://www.daml.org/services/owl-s/tools.html, http://owlseditor.semwebcentral.org/
23. http://staff.um.edu.mt/cabe2/supervising/undergraduate/owlseditFYP/OwlSEdit.html
24. http://alphaworks.ibm.com/tech/ettk
25. http://owlsm.projects.semwebcentral.org/
26. http://research.dnv.com/external/default.htm
27. http://193.71.42.92/websolution/UI/Troux/07/Default.asp?WebID=260&PageID=1
28. http://www.athenaip.org
This work was previously published in Global Implications of Modern Enterprise Information Systems: Technologies and Applications, edited by Angappa Gunasekaran, pp. 172-194, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 3.13
In-House vs. Off-the-Shelf e-HRM Applications

Nawaf Al-Ibraheem, KNET, Kuwait
Huub Ruël, University of Twente, The Netherlands & American University of Beirut, Lebanon

DOI: 10.4018/978-1-60566-304-3.ch006
ABSTRACT

Companies new to e-HRM technologies are overwhelmed by the dilemma of choosing either ready-made, off-the-shelf e-HRM systems or developing their own e-HRM systems in house in order to implement the e-HR transformation. Therefore, this research was done to shed some light on the differences and similarities between off-the-shelf e-HRM systems and in-house developed ones, with regard to some elements developed in a preliminary framework, such as the implementation and development approaches, the e-HRM activities they facilitated, application types and characteristics, and e-HRM outcomes and benefits. This comparison provided insightful information that could help companies
make the most effective choice between the two systems. It was found through this research that factors such as continuous user involvement, effective communication, and strong change management are most considered by companies that develop e-HRM in house, while advocates of off-the-shelf e-HRM systems are most affected by success factors such as business process reengineering, planning and vision, and project management. Another finding was that increasing efficiency, providing customer-oriented service excellence, and improving self-services were the top goals accomplished by companies developing their e-HRM systems in house. These findings, besides many others discovered in this research, would help companies decide which system best fits their needs and accomplish high levels of effectiveness gained from the transformation of their HR function to e-HR.
INTRODUCTION

Human Resources Management has always been among the first business processes to use the emerging technologies of a new era. As a matter of fact, payroll administration is known to be one of the earliest business processes to be automated (Lengnick-Hall and Mortiz, 2003, p. 365). Through the use of technology and information systems, human resources departments of companies around the world are able to use computers to log employees' data and interact electronically with them. These functions and more are offered to human resources personnel and other employees through what is called Electronic Human Resources Management, or e-HRM. Since the 1960s many firms (large and small) all over the globe have been implementing IT-based Human Resources Management applications in order to reduce the amount of associated costs. Ball (2001) pointed out that by 1998, 60 percent of the Fortune 500 companies used a Human Resources Information System (HRIS) to support daily HRM operations. The main benefits of HRISs are focused on improved accuracy, the provision of timely and quick access to information, and the saving of costs (Ngai & Wat, 2006, based on Lederer, 1984; Tetz, 1973; Wille & Hammond, 1981). For human resources activities, e-HR has the potential to enhance efficiency by reducing cycle times for processing paperwork, increasing data accuracy, and reducing the HR workforce (Lengnick-Hall and Mortiz, 2003). Effectiveness can also be influenced by empowering both employees and managers to make better, more accurate, and timely decisions (Lengnick-Hall and Mortiz, 2003). HRISs started to become more internet-technology based in the second half of the 1990s, with the aim of not only supporting the HR department itself, but also targeting managers' and employees' effectiveness. The term electronic Human Resources Management (e-HRM) was coined and has become the dominating label for HRM services delivered
through internet-technology-based applications. Ruël et al. (2004) define e-HRM as a way of consciously implementing HRM practices, policies and strategies supported by or fully delivered through internet-technology-based applications. Terms like Management Self-Service (MSS) and Employee Self-Service (ESS) started to be used as well. The purposes of implementing e-HRM systems broadened in comparison to those connected to HRISs. During the e-commerce era of the 1990s, the term e-HRM emerged, basically referring to conducting Human Resources Management functions using the Internet (Lengnick-Hall and Mortiz, 2003, p. 365). Nowadays, many organizations are implementing e-HRM within the strategic design of their core businesses. Some organizations require the use of standardized HR management tools such as payroll, employee benefits, recruitment, and training. These organizations mainly refer to off-the-shelf e-HRM solutions offered by third parties such as Oracle, PeopleSoft, SAP, or IBM, where they perceive efficiency and fulfilment in these ready-made systems. Others require customized Human Resources Information Systems (HRIS) tailored to best fit their business's needs. One of the key advantages of in-house developed HRISs is that they save HR staff time in dealing with the elements of the application, as they already know and understand the parameters of their own software (Thaler-Carter, 1998, p. 22). One objective therefore becomes the main purpose of this research, and that is to compare and contrast in-house developed and off-the-shelf e-HRM systems with regard to their implementation, associated costs, usage, and effectiveness.
LITERATURE BACKGROUND

An essential step in e-HRM systems implementation is the decision whether to buy an off-the-shelf e-HRM software package (in many cases part of an Enterprise Resource Planning (ERP) package
like Oracle/PeopleSoft or SAP) or to develop e-HRM systems in-house. As Kovach et al. (2002) describe, a small company might prefer to depend on consultants and outsource all aspects of design, implementation, and operation, while a large company could prefer to opt for in-house development of e-HRM systems by its own IT department. The widely believed benefits of off-the-shelf systems in general are time savings, proven system quality, and availability of expertise through the provider. Conversely, their disadvantages are the lack of customization/flexibility and the dependency on vendor expertise. In order to avoid these downsides, companies might opt for in-house development of an HRIS. Exact figures regarding the differences and similarities between off-the-shelf and in-house developed e-HRM do not really exist. To take Kovach et al.'s (2002) point further, in the case of e-HRM the acceptance and usage of applications by employees and managers is critical to successful implementation and the generation of results. That makes the decision whether to 'buy or build' even more important, and seems to favor in-house development over off-the-shelf implementation. However, well-founded knowledge on this issue is so far lacking. The annual CedarCrestone survey (2006) shows that many organizations lay out more expenses for outsourced solutions (mainly off-the-shelf) than they do for in-house development. Three main reasons are given: organizations with in-house software development mostly have fragmented system operations, which cause them to overlook costs embedded within distributed operations; outsourcing assures a better view and control of costs; and outsourcing is a solution for organizations that struggle to control costs due to the complexity of their operations (HR Technology Trends to Watch in 2007, 2007, p. 1). On the other hand, the CedarCrestone survey (2006) notes that some respondents sense that in-house HR solutions positively affect employees' productivity and can be easily and swiftly integrated into the current
system, while off-the-shelf software would cause difficulties during the integration phase. In this study, the authors concentrate on the differences and similarities between off-the-shelf and in-house developed e-HRM implementations. We define implementation broadly, from the first initiation to the evaluation of e-HRM results, covering both semi-technical issues (i.e., type of applications, functionality, design) and organizational issues (i.e., change management, HR restructuring, etc.). Technology plays the role of an enabler that helps HR deliver workforce support and management based on business needs (Gueutal & Stone 2005, p. 2). New web technologies have enabled HR to perform, without paper, certain tasks that used to consume time and human resources. Furthermore, technology-enabled HR functions have evolved into human capital management (HCM); Gueutal and Stone (2005, p. 2) agree with Ruël et al. in stating that HCM and e-HR are the responsibilities of everyone in an organization, from employees to executives. Nevertheless, Hendrickson (2003, p. 383) stresses that this change in information technology has created significant challenges for HR professionals, who must keep up with the latest technology while simultaneously transforming HR processes into e-processes. From these definitions arises a general understanding of what e-HRM is. In this research, e-HRM is the management of an HR network that connects employees, managers, and HR staff through web-technology-enabled channels that provide them with electronic access to human resources transactions, strategies, policies, and practices.
Benefits, Goals, and Objectives of e-HRM

The fast-changing world of business demands organizational flexibility, capability, and rapid response in order to succeed (Lepak & Snell
1998, p. 215). Therefore, e-HRM must attain objectives such as a strategy-oriented focus, flexibility, cost containment, and service-providing excellence (Lepak & Snell 1998, p. 215). Strategic orientation can be achieved by concentrating on the transformational HR activities mentioned in section 2.2, while flexibility can be reached by updating HR processes, policies, and practices to counter the rapidly changing business world. Efficiency, on the other hand, can be achieved in many ways, such as staff reduction, elimination of manual processes, and timeliness. However, cost reduction can only be considered beneficial if there is no loss of effectiveness or deterioration of customer service excellence (Schramm 2006, p. 3). Hendrickson (2003) agrees that efficiency is one of the goals of implementing e-HRM, achievable by increasing the number and timeliness of HR transactions without increasing the resources needed. Another benefit of using e-HRM is the increased effectiveness of human resources management through the use of technology, which simplifies HR processes and enhances performance (Hendrickson 2003, p. 385). Beer et al. (1984) distinguish four different benefits or outcomes of e-HRM: commitment, competence, congruence, and cost effectiveness. Commitment is achieved by motivating employees to interact with managers cohesively during changes to achieve organizational goals (Ruël et al. 2004, p. 369). High competence is when employees learn new tasks and gain competitive skills that enhance their contribution, effectiveness, and creativity. Congruence is another benefit of implementing e-HRM, involving structuring the business from the inside out in the best interest of stakeholders (Ruël et al. 2004, p. 369). Cost effectiveness aims at increasing the economic value produced by the tangible and intangible benefits relative to the resources spent (i.e., money, time, human resources, etc.). A cost-effective e-HRM system, then, is one that provides adequate e-HRM services for the reasonable HR-related
costs it incurs during and after the implementation stage, such as employee turnover, resistance, and confusion. Most authors agree that efficiency is a top goal for organizations that use e-HRM, and cost reduction is widely considered its main benefit. Other benefits may be less noticeable or tangible than efficiency. Lengnick-Hall and Moritz (2003) divide the outcomes of e-HRM into tangible and intangible benefits. Tangible benefits include process and administrative cost reduction, staff reduction, transaction processing speed-up, elimination of information errors, and improvement in tracking and controlling HR activities (Lengnick-Hall & Moritz 2003, p. 369). Intangible benefits are mostly associated with the indirect effect of e-HRM on the HR function; such benefits include improvements in employee productivity and morale, decision making, and information sharing. Innovation enhancement and time-to-market acceleration might also be considered intangible benefits of e-HRM (Lengnick-Hall & Moritz 2003, p. 369). To summarize this section, the author will consider the following main goals of implementing e-HRM within organizations:
• Increase efficiency by reducing cost and eliminating unnecessary functions.
• Provide management and employee self-service mechanisms.
• Improve HR's strategic focus.
• Achieve client service excellence.
On the other hand, different outcomes can be gained from implementing e-HRM within an organization. For this research, the following outcomes will be considered:
• Commitment
• Competence
• Cost effectiveness
• Congruence
e-HRM Applications

e-HRM applications (as depicted in Figure 1) represent the outermost layer of the human resources management model suggested in this research. The model suggests that the outer layers comprise all of the elements of the inner ones; the application layer therefore includes all the basic requirements and functions of HRM and e-HRM, including its types, goals, and benefits. e-HRM applications are wide-ranging and provide a variety of automated HR activities that endow the HR function with flexibility and ease of use. In an IOMA (2002) survey, professionals reported that the e-HRM applications most preferred by users are those with high levels of integration and processing capability, user friendliness, robustness, and reporting. Furthermore, e-HRM studies based on the Technology Acceptance Model (Davis et al. 1989) have examined the use and outcomes of e-HRM applications and determined that these depend mainly on the usefulness and ease of use of the applications (Ruta 2005). On one hand, usefulness is defined by Davis et al. (1989, p. 985) as "the prospective user's subjective probability that using a specific application system will increase his or her job performance within an organizational context". This means that for an e-HRM system to be perceived by users as useful, a positive impact on an employee's organizational performance must be observed during or after the use of the e-HRM applications. On the other hand, Davis et al. (1989, p. 985) define perceived ease of use as "the degree to which the prospective user expects the target system to be free of effort." e-HRM applications can be perceived by employees as "easy to use" if they require little effort to operate. Combining the e-HRM definition produced in section 2.1 with the general understanding of e-HRM applications mentioned here yields a predominant meaning for e-HRM applications:
e-HRM applications are the software programs that offer a useful and easy-to-use electronic medium through which the e-HRM goals are accomplished by performing different types of human resources activities electronically to yield the desired outcomes and benefits. This definition explains the layered approach shown in Figure 1, but it does not explain the types of these software applications. The following section suggests a few types of e-HRM applications that have been discussed in previous literature.
Types of e-HRM Applications

e-HRM applications can be classified into different categories according to the HR activities they facilitate electronically. Seven types have been identified by Florkowski and Olivas-Luján (2006) and divided into two predominant groups: software applications that target HR staff as end users, and those that target internal staff (managers and employees) as end users. The types that target HR staff are HR functional applications and integrated HR suite applications. Those that focus on managers and employees as end users are Interactive Voice Response (IVR) systems, HR intranet applications, self-service applications, HR extranet applications, and HR portal applications (Florkowski & Olivas-Luján 2006, p. 687).

Figure 1. e-HRM layers
The CedarCrestone annual report (2007), on the other hand, divides over 30 e-HRM applications into the following groups:

Administrative applications, including core HR functions such as payroll, benefits, and the HR management system. This category handles most of the operational HR activities (mentioned under section 2.2) that are administered by HR staff. The report identifies the three major activities belonging to this group as payroll, the HR management system, and benefits administration (CedarCrestone annual report 2007).

Employee and manager productivity applications, which provide self-service transactions aimed at improving service delivery, reducing cost, and enabling employees and managers to concentrate more on core processes (CedarCrestone annual report 2007). These applications also provide access to operational HR activities performed only by managers and employees, such as benefits-related self-service (BSS), simple management reporting (SMR), pay-related self-service (PSS), employee self-service (ESS), time management self-service (TMSS), total benefits statements (TBS), the HR-oriented help desk, and manager self-service (MSS).

Strategic HCM applications, such as talent management, training enrollment, competency management, e-learning, compensation, performance, succession planning, and career planning. These applications transform relational HR activities into their electronic versions.

Business intelligence applications, comprising HR data warehousing, operational reporting, the HR scorecard, and analytics that help develop the strategic focus of an organization. These applications are the protagonists of e-HRM: they facilitate the transformational activities that help develop the strategic orientation of the HR function.

Looking at the adoption of e-HRM applications, the financial services sector appears to lead other industries in the e-commerce market of human resources management by automating most
of its HR activities with the help of the application sets mentioned above (CedarCrestone annual report 2007). By contrast, CedarCrestone reports that the industries slowest to adopt e-HRM technologies are retail and public administration. The top players in the e-HRM application software market are Oracle/PeopleSoft, ADP, SAP, and Kronos; most of the applications mentioned above are supported by these vendors and favored by a plethora of HR departments around the world.
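As a compact restatement of the classification above, the following sketch files each application under its CedarCrestone group. It is illustrative only: the group names and example applications come from the report as summarized in this chapter, while the data structure and the function are hypothetical conveniences.

```python
# Hypothetical lookup table for the four CedarCrestone (2007) groups.
EHRM_APPLICATION_GROUPS = {
    "administrative": ["payroll", "benefits administration", "HR management system"],
    "employee/manager productivity": ["ESS", "MSS", "PSS", "TMSS", "BSS",
                                      "SMR", "TBS", "HR help desk"],
    "strategic HCM": ["talent management", "e-learning", "compensation",
                      "performance", "succession planning", "career planning"],
    "business intelligence": ["HR data warehousing", "operational reporting",
                              "HR scorecard", "analytics"],
}

def group_of(application: str) -> str:
    """Return the group an application belongs to, or 'unknown'."""
    for group, apps in EHRM_APPLICATION_GROUPS.items():
        if application in apps:
            return group
    return "unknown"

print(group_of("payroll"))  # -> administrative
```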
e-HRM Applications Implementation and Development Approaches

The implementation and development of e-HRM applications does not require a set of critical success factors or approaches different from those established for Enterprise Resource Planning (ERP) projects or large enterprise portal projects, as e-HRM is considered part of an ERP implementation itself. These large projects differ from traditional information system projects in scale, scope, complexity, organizational change, costs, and the need for business process reengineering (Remus 2007, p. 538). As a result, technical and business difficulties during the implementation and development of these systems (including e-HRM systems) are imminent. The literature indicates that these difficulties are widespread, but research on the critical success factors and approaches to ERP implementation remains fragmented (Nah et al. 2001, p. 286). These approaches were narrowed down to the ones shown in Table 1, which have been previously identified in the academic articles indicated there.
User Involvement

User involvement in the early stages of the implementation and development phases of the e-HRM system allows users to make adjustments to the system to satisfy their needs. Consequently, organizational resistance to the changes implied by the use of the e-HRM applications is minimized and customer satisfaction is increased (Lee & Lee 2001, p. 208). Change management processes in this case are minimized and easily controlled.
Table 1. e-HRM implementation and development approaches, as identified in the literature (Lee & Lee, 2001; Nah et al., 2001; Al-Sehali, 2000; Remus, 2007)
• User involvement
• Business process reengineering
• Planning & strategy
• Training & education
• Change management
• Top management support
• Effective communication
• Project management
Table 2. Research constructs
• Overview: This strategy is used to start the case study research by collecting general information about the organization to be researched and the e-HRM system it uses.
• Implementation and Development Approaches: The factors affecting the implementation and development of the e-HRM system and how the company deals with them.
• E-HRM Goals: The main goals of implementing the e-HRM system and how they coincide with corporate strategy.
• E-HRM Activities: The HR-related activities offered by the new e-HRM system and how employees work with them.
• E-HRM Application Characteristics: The characteristics of the e-HRM software application as defined by the ISO 9126 standard.
• E-HRM Outcomes: The four C's of HR (commitment, competence, congruence, and cost effectiveness) and how they are affected by the implementation of the e-HRM system.
• HRIS Benefits: Usefulness and ease of use of the new e-HRM system.
Business Process Reengineering

Achieving benefits through the implementation of an enterprise system is impossible without aligning processes and activities with the new system's requirements (Remus 2007; Bingi et al. 1999). This means that when a company implements a new e-HRM system, some of its HR processes must be reengineered in order
for the e-HRM system to be more effective. Such reengineering is applied, for example, when transforming manual HR processes into paperless forms. Reengineering should begin before choosing the software system, to make sure changes are accepted by the stakeholders and the processes can actually be aligned with the new system (Nah et al. 2001, p. 294).
Planning and Vision

A solid business plan needs to be defined in order for the ERP project to be directed toward
the proposed strategic and tangible benefits, resources, costs, risks, and timeline (Buckhout et al. 1999). For e-HRM implementation to be successful, a plan must be agreed upon by the project manager or the responsible parties and followed throughout the project life cycle. This plan aligns the e-HRM goals and strategy with the HR and corporate strategies to ensure maximum effectiveness and integration. Lee and Lee (2001) insist that good planning consumes a considerable amount of time prior to implementation; this ensures that the e-HRM system is thoroughly exploited and efficiently implemented in line with corporate strategies.
Training and Education

Since the e-HRM system offers new methods of processing transformed or new HR activities, proper training must be given to all users of the system. This is crucial because the new interface provides functionality that has never been used before and needs to be related to the newly reengineered business processes (Remus 2007, p. 544). Education is the catalyst that brings users' knowledge to the point where they can familiarize themselves with the new e-HRM system quickly and sufficiently.
Change Management

Managing change within the organization could be a full-time job by itself, as it requires managing people and their expectations, resistance to change, confusion, redundancies, and errors (Remus 2007, p. 541). For e-HRM to be successfully implemented, the organization should realize the impact of this change on employees, managers, and HR staff, and understand its dimensions in order to manage its effects with a corporate strategy that is open to change. Furthermore, an emphasis on quality, computing ability, and willingness to accept the new technology
would positively assist the implementation effort (Nah et al. 2001, p. 293). Training and education are a critical part of managing change itself, as employees must be educated about the new system to understand how it changes business processes (Nah et al. 2001, p. 293).
Top Management Support

One of the most critical success factors for implementing an ERP system is the support and involvement of top managers throughout the project's life cycle (Al-Sehali 2000, p. 79). For e-HRM implementation to be successful, top managers have to approve of and continuously support the responsible parties during the implementation stage, to make sure no obstacles prevent or delay the progress of the project. Also, an executive sponsor should be appointed to coordinate, communicate, and integrate all aspects of the project between the development team and top management (Remus 2007, p. 544). The executive sponsor should communicate, integrate, and approve the shared vision of the organization and the responsibilities and structures of the new e-HRM system (Nah et al. 2001, p. 291).
Effective Communication

Interdepartmental communication, as well as communication with customers and business partners, is a key element in the success of an ERP implementation (Remus 2007, p. 544). Communication helps employees and other involved parties better understand the new e-HRM system and keep up with the development and implementation stages of the project. Employees should also be informed in advance of the scope, objectives, activities, and updates introduced by the new system in order to manage their expectations (Nah et al. 2001, p. 291).
Project Management

Managing the implementation and development of an e-HRM system is a crucial step toward
successful results. The scope of the project must be clearly defined, including aspects such as the number of systems implemented, the involvement of business units, and the amount of business process reengineering needed (Nah et al. 2001, p. 292). A company must assign a project manager to lead the development and implementation of the e-HRM system professionally, according to sound business rules. The project itself must have clearly defined business and technical objectives and goals corresponding to the project deliverables (Remus 2007, p. 543). The e-HRM goals (mentioned in section 2.3) are embedded within the different e-HRM activities (mentioned in section 2.2) as part of the project management scheme.
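Taken together, the eight approaches above can be read as a readiness checklist for an e-HRM project. The sketch below is hypothetical: the approach names come from Table 1, while the checklist structure and function are invented for illustration.

```python
# The eight approaches from Table 1, treated as a readiness checklist.
APPROACHES = [
    "user involvement",
    "business process reengineering",
    "planning & vision",
    "training & education",
    "change management",
    "top management support",
    "effective communication",
    "project management",
]

def readiness_report(status: dict) -> list:
    """List the approaches not yet addressed; `status` maps approach -> bool."""
    return [a for a in APPROACHES if not status.get(a, False)]

# Example: a project that has so far only secured sponsorship and a plan.
print(readiness_report({"top management support": True,
                        "planning & vision": True}))
```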
Recent Studies on e-HRM Application Implementation

An essential step in e-HRM implementation and development is the decision whether to buy an off-the-shelf e-HRM system (in many cases part of an Enterprise Resource Planning (ERP) package such as Oracle/PeopleSoft or SAP) or to develop such a system in-house. As Kovach et al. (2002) describe, a small company might prefer to depend on consultants and outsource all aspects of design, implementation, and operation, while a large company could prefer to opt for in-house development of e-HRM applications by its own IT department. The widely believed benefits of commercial off-the-shelf (COTS) systems in general are time savings, proven system quality, and availability of expertise through the provider. Furthermore, COTS systems help reduce the time and resources used during the design and implementation phases of a software project, as the software is already professionally designed and implemented to fit most customers' needs (Amons & Howard 2004, p. 50). Silverman (2006) offers a few reasons why most organizations opt for COTS solutions. First, avoiding lengthy and complex software design and development cycles saves time, money,
and human resources. Second, risk is minimized by the fact that COTS systems are professionally provided and supported by experienced entities. Third, COTS vendors take responsibility for supporting and maintaining the software, which reduces the cost of supporting the software itself. Fourth, COTS systems stay virtually up to date, as vendors keep investing in developing the software (Silverman 2006, p. 10). It is also well established that most IT managers evaluate COTS software first when money and time-to-market are at the top of the priority list (Traylor 2006, p. 20). This makes it clear that off-the-shelf software saves time and money. On the other hand, the disadvantages of off-the-shelf systems are the lack of customization/flexibility and the dependency on vendor expertise. Amons and Howard (2004) point out that buying a ready-made software package increases the time and other resources spent on the integration and deployment phases of the IT project: COTS systems need to be perfectly aligned with and integrated into the practices of the organization, and deployed in a manner that best fits the company's vision. One particular disadvantage of COTS is of an intangible nature: when most companies adjust their practices to conform to the COTS software's standard offerings, they lose their competitive advantage, as their technology-based processes become replicas of each other (Carr 2003, p. 6). For e-HRM, this means that companies cannot strategically position their HR activities or improve their strategic orientation through electronic means. Although off-the-shelf software is believed to embody the industry's best practices, Yeow and Sia (2007) posit that the notion that "best practices" are context-free and can be conveniently acquired, stored, used, and transferred across organizations is questionable. For example, implementing off-the-shelf ERP systems such as Oracle, PeopleSoft, and SAP in the context of non-US/European and non-private organizations (i.e., governmental institutes in Kuwait) can
be quite challenging and might not even fit the environment's needs. Any discrepancies between the organization's needs and the off-the-shelf software's features will most likely affect overall project success (Lucas et al., 1988; cited by Yeow and Sia 2007, p. 2). In order to avoid these downsides, companies might opt for in-house development of an HRIS. Building e-HRM applications in house requires a dedicated team of developers fully skilled in designing and implementing the software. One of the reasons companies choose to develop their HRIS in-house is that they can design the software precisely to fit their organizational goals, as they know their own processes better than anyone else (Zizakovic 2004, p. 4). In this way, companies gain a competitive advantage and retain full control of their proprietary software. Pearce (2005) provides three key considerations for software implementation and development. First, organizations must consider the function of the IT application (i.e., the e-HRM application) to determine the level of standardization required by its operations and whether COTS software can provide these standard operations (Pearce 2005, p. 93). For example, e-HRM activities such as payroll and benefits can be suitably served by COTS systems. Second, organizations must consider the life cycle of the application and how it will be modified, updated, maintained, and supported over the years to come (Pearce 2005, p. 94). This consideration helps an organization set forth a long-term strategic plan for its e-HRM practices in order to keep sharpening its competitive edge. The third key element in deciding on the best software implementation strategy is the return on investment, or ROI. Pearce (2005) insists that companies must weigh the costs associated with the different implementation and development methods against the potential outcome of the IT project itself.
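A worked toy example of that ROI comparison follows. The formula is the standard net-benefit-over-cost ratio; all figures are hypothetical placeholders, not data from this study.

```python
# ROI = (benefits - costs) / costs, the classic return-on-investment ratio.
def roi(benefits: float, costs: float) -> float:
    return (benefits - costs) / costs

cots = roi(benefits=500_000, costs=350_000)      # off-the-shelf estimate
in_house = roi(benefits=500_000, costs=420_000)  # in-house estimate
print(f"COTS ROI: {cots:.0%}, in-house ROI: {in_house:.0%}")
```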
To take Kovach et al.'s (2002) point further, in the case of e-HRM the acceptance and usage of applications by employees and managers is critical to successful implementation and the generation of results. That makes the decision whether to 'buy or build' even more important, and seems to favor in-house development over off-the-shelf implementation. However, well-founded knowledge on this issue is so far lacking. The annual CedarCrestone survey (2006) shows that many organizations lay out more expenses for outsourced solutions than they do for in-house development. Three main reasons are given: organizations with in-house software development mostly have fragmented system operations, which cause them to overlook costs embedded within distributed operations; outsourcing assures a better view and control of costs; and outsourcing is a solution for organizations that struggle to control costs due to the complexity of their operations (HR Technology Trends to Watch in 2007, 2007, p. 1). On the other hand, the CedarCrestone survey (2006) notes that some respondents sense that in-house HR solutions positively affect employees' productivity and can be easily and swiftly integrated into the current system, while off-the-shelf software would cause difficulties during the integration phase.
Theoretical Framework

The suggested framework for this research is depicted in Figure 2. It combines the main elements discussed in the literature review. Since studies on the different e-HRM applications are scarce, this framework is used primarily to study the elements of implementing software applications, focusing predominantly on implementing e-HRM software applications and the effects they might have on the HR function.

Figure 2. Theoretical framework

The framework suggests three dependent stages of e-HRM system implementation and deployment. The first stage is the e-HRM software
implementation and development approaches, which are affected by the type of system used by the organization, whether a commercial off-the-shelf e-HRM system or one developed in house. Another factor affecting this stage is the set of predetermined e-HRM goals suggested in the literature review. The e-HRM goals shape the implementation and development approaches and set a course of action to be followed during all three stages. The second stage consists of the main elements of e-HRM application deployment, such as the e-HRM activities handled by the system, the application types, and the application characteristics. These application criteria help an HR department provide the e-HR functions necessary to accomplish the goals set in the first stage and meet the needs of the stakeholders. Note that each e-HRM activity type fits under at least one of the e-HRM application types. Furthermore, the e-HRM application and activity types and the application characteristics are interdependent (represented by the up-down arrow in the figure). For example, relational e-HRM activities such as training, recruitment, and performance management depend mainly on the reliability and usability of the e-HRM system, while the functionality and efficiency of the e-HRM system might vary with the e-HRM activity type. The last stage is by no
means less valuable than the other stages. It represents the outputs and benefits an HR department aims for during the e-HRM transformation. These outputs consist of the "four C's of HR" (commitment, competence, cost effectiveness, and congruence) and the user benefits suggested by the Technology Acceptance Model, namely usefulness and ease of use.
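To make the TAM side of the framework concrete, the following sketch scores the two TAM constructs from Likert-scale questionnaire items. This is a generic illustration, not the authors' instrument: the item values, the weights, and the function names are hypothetical.

```python
# TAM-style scoring: Likert items are averaged into perceived usefulness (PU)
# and perceived ease of use (PEOU), the two constructs of Davis et al. (1989).
def construct_score(item_responses):
    """Average a list of 1-7 Likert responses into one construct score."""
    return sum(item_responses) / len(item_responses)

def acceptance_index(pu_items, peou_items, w_pu=0.6, w_peou=0.4):
    """Weighted blend of PU and PEOU; the weights here are hypothetical."""
    pu = construct_score(pu_items)
    peou = construct_score(peou_items)
    return w_pu * pu + w_peou * peou

# Example: one respondent's ratings for an e-HRM self-service module.
print(acceptance_index(pu_items=[6, 5, 6], peou_items=[4, 5, 4]))
```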
METHODOLOGY

Deciding which research strategy to adopt depends on the nature of the research question and how the author plans to answer it. The strategy consists of a plan with clear objectives derived from the research question and specifies the sources from which to collect data (Saunders et al. 2003, p. 90). Since the aim of this research is to compare two types of e-HRM system implementation, the author adopts a comparative, descriptive case study strategy. A comparative study scrutinizes the differences and similarities between the two e-HRM implementation methods, whilst a descriptive study attempts to construct a precise profile of each (Robson 2002; cited in Saunders et al. 2003, p. 97). Furthermore, the descriptive study is a fitting strategy
that supports the inductive research approach, helping to reach a theoretical conclusion about the outcomes and the level of effectiveness induced by both implementation methods. A case study research strategy implies detailed, empirical investigation of a particular phenomenon within its real context (Cassell & Symon 2004, p. 323). While the case study strategy provides a good challenge to, and a source of new hypotheses for, an existing theory, it can also help with the creation of new theory using different data collection methods such as questionnaires, interviews, observation, and documentary analysis (Saunders et al. 2003, p. 93). This fits the research philosophy and approach suggested in the previous sections, as Cassell and Symon (2004) agree that the case study strategy begins with a primitive framework which is then reproduced or developed into a new theoretical framework that makes sense of the collected data and can be systematically examined for plausibility. The strategy also facilitates the collection of data through multiple research instruments such as participant observation, direct observation, interviews, focus groups, and documentary analysis (Cassell & Symon 2004, p. 325). Consequently, a variety of research instruments can be utilized to understand how the different e-HRM implementation methods affect the HR function. All in all, the case study strategy is a prudent choice for this research, as it offers the researcher a flexible environment and gathers as much information as necessary to conclude how the two e-HRM systems compare and contrast. For the purpose of this research, two descriptive case studies were conducted at local organizations chosen according to the e-HRM system they used:

• Case Study 1: This case study was conducted at a well-known company that uses a commercial off-the-shelf e-HRM system by Oracle, called "Oracle ERP". This company was chosen because of its extensive experience with the e-HRM system and its strong reputation in the Kuwaiti marketplace.
• Case Study 2: The second case study was conducted at a governmental institute that had gained its competitive advantage mainly by leveraging its IT capabilities to provide the latest technological trends to its customers. This organization was chosen because it was one of the few local organizations to have developed its e-HRM system in house.
The main constructs used in this research are shown in Table 2. These variables can then be used to create a set of questions that help the researcher gather appropriate research data for analysis. This process is called the "operationalization of constructs" and is explained in the next section.
Operationalization of Constructs

Breaking the constructs down into research questions helps the researcher design the appropriate research instruments for collecting and testing sufficient data about the commercial off-the-shelf and in-house developed e-HRM systems. By doing so, the constructs are "operationalized" so that they can be measured accordingly. Table 3 shows the operationalization of the constructs mentioned in the previous section.
Research Instruments

The case study is a strategy that includes multiple methods (or instruments), suited to research questions that require a detailed understanding of a particular phenomenon due to the large amount of data collected in context (Cassell & Symon 2004, p. 323). For this research, the data collection instruments listed after Table 3 were used.
Table 3. Operationalization of constructs

Overview
• Background (general information about the company). Operationalized as: business establishment; number of employees; type of business.
• Current HR Practices (where the HR practices currently stand). Operationalized as: main function/mission of the HR department; level of employee/manager involvement.
• E-HRM Application (the e-HRM application currently in use). Operationalized as: e-HRM system brand/name; reason for choosing this system; criteria used to make the decision.
• E-HRM Activities (HR activities utilized/offered by the new e-HRM system). Operationalized as: HR activities handled by the system; new HR activities introduced by the system; level of employee/manager involvement in e-HRM activities.

Implementation & Development Approaches
• User Involvement (involving employees and managers in the e-HRM implementation and development process). Operationalized as: type of users involved; stage(s) at which users were involved; level of involvement.
• Business Process Reengineering (reengineering/redesigning of HR business processes). Operationalized as: HR processes reengineered; HR processes eliminated by BPR; new HR processes introduced.
• Planning & Vision (the planning process and strategy used to implement the e-HRM system). Operationalized as: type and quality of business plan; plan outline; project strategy alignment with corporate strategy and vision.
• Training & Education (plan taken to train and educate users of the new e-HRM system). Operationalized as: level of training/education; training material; training delivery.
• Change Management (how the change/conflict implied by the new e-HRM system is managed). Operationalized as: resistance, conflict, or confusion by staff/users; handling of conflict; corporate strategy toward change.
• Top Management Support (the support of the e-HRM system implementation by top managers). Operationalized as: level of top management involvement.
• Effective Communication (inward/outward communication needed during the implementation phase). Operationalized as: level and type of communication between the parties involved; communication of project scope, objectives, activities, and updates.
• Project Management (managing and administering the e-HRM implementation project). Operationalized as: implementation management; responsible PM team; project milestones.

e-HRM Goals
• Increase Efficiency (make HRM more efficient by reducing time, costs, human resources, etc.). Operationalized as: achieving cost reduction; reduction of cost, time, staff, and other resources; processes eliminated to achieve efficiency.
• Provide Self Service (provide a mechanism for employees and managers to perform HR functions). Operationalized as: Employee Self-Service (ESS); Manager Self-Service (MSS).
• Achieve Service Excellence (provide high-quality HR services to employees and managers). Operationalized as: customer orientation of the HR department; how customers perceive the quality of services.
• Improve Strategic Focus (the strategic orientation and focus of the organization should be improved through the use of the e-HRM system). Operationalized as: the effect of the e-HRM implementation on HR strategy; changes made to improve HR strategy.

e-HRM Activities
• Operational Activities (routine HR tasks such as payroll, attendance, vacation, etc.). Operationalized as: transformed operational activities; new operational activities introduced by e-HRM; effect of e-HR operational activities on the HR function.
• Relational Activities (internal and external relational activities such as recruitment and selection, training, performance management, etc.). Operationalized as: transformed relational activities; new relational activities introduced by e-HRM; effect of e-HR relational activities on the HR function.
• Transformational Activities (HR activities aimed at improving the strategic orientation of the HR department, such as knowledge management, competence management, strategic redirection, downsizing, upsizing, rightsizing, etc.). Operationalized as: transformed transformational activities; new transformational activities introduced by e-HRM; effect of e-HR transformational activities on the HR function.

e-HRM Application Characteristics
• Functionality (how the e-HRM system functions with regard to suitability, accuracy, interoperability, and security). Operationalized as: fulfilment of stated needs; appropriate set of HR functions; expected results; information and data security.
• Reliability (how reliable the e-HRM system is in delivering HR needs). Operationalized as: system maturity; fault tolerance; recoverability.
• Usability (how easy it is to understand, learn, and operate the e-HRM system). Operationalized as: understandability; learnability; operability; attractiveness.
• Efficiency (performance of the e-HRM system with regard to time and other resource utilization). Operationalized as: time utilization; resource utilization.
• Maintainability (the stability of the e-HRM system when analyzed, changed, and tested). Operationalized as: analyzability; changeability; stability; testability.
• Portability (how the e-HRM system can adapt to new requirements, be installed in different environments, coexist with other software, and be replaced). Operationalized as: adaptability; installability; co-existence; replaceability.

e-HRM Outcomes
• Commitment (employees' commitment to the organization due to the implementation of the new e-HRM system). Operationalized as: change in the level of employee commitment to the organization.
• Competence (development of employees' competencies due to the implementation of e-HRM). Operationalized as: influence of e-HRM on employee competencies.
• Cost Effectiveness (benefits of implementing e-HRM, such as providing adequate e-HRM services, should outweigh costs such as employee resistance and turnover). Operationalized as: cost of delivering e-HRM practices and services, such as staff turnover and resistance; benefits versus costs.
• Congruence (the effect of e-HRM on the level of congruence between employees' own goals and those of the organization). Operationalized as: relationship between managers and employees; relationship between employees.

HRIS Benefits
• Usefulness (how useful the e-HRM system is in enhancing employee performance). Operationalized as: enhancement of employee performance.
• Ease of Use (ease of performing e-HRM activities). Operationalized as: effort needed to perform e-HRM tasks.
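The application-characteristics block of Table 3 follows the six ISO 9126 quality characteristics. The sketch below expresses them as a scoring checklist; the characteristic and sub-characteristic names are taken from the table, while the dictionary and the helper function are hypothetical conveniences.

```python
# The six ISO 9126 characteristics and the sub-characteristics listed in Table 3.
ISO_9126 = {
    "functionality": ["suitability", "accuracy", "interoperability", "security"],
    "reliability": ["maturity", "fault tolerance", "recoverability"],
    "usability": ["understandability", "learnability", "operability", "attractiveness"],
    "efficiency": ["time utilization", "resource utilization"],
    "maintainability": ["analyzability", "changeability", "stability", "testability"],
    "portability": ["adaptability", "installability", "co-existence", "replaceability"],
}

def missing_subcharacteristics(scores: dict) -> list:
    """Return (characteristic, sub-characteristic) pairs not yet rated."""
    return [(c, s) for c, subs in ISO_9126.items() for s in subs if s not in scores]

print(len(missing_subcharacteristics({})))  # -> 21 sub-characteristics in total
```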
• Semi-structured interviews: These non-standardized interviews use a list of questions that varies between interviews according to the context (Saunders et al. 2003, p. 246). The interview questions are the outcome of the operationalization of constructs described in the previous section. Table 4 shows the positions of the interviewees and the related questions used during the interviews; a sketch of this mapping follows this list.
• Documentary analysis: Company documentation of the e-HRM system implementation is analyzed to understand how the system works. This data, where available, provides a secondary source of information that can help the researcher reach a final conclusion about the research question.
• Participant observation: This method allows the researcher to observe first-hand the day-to-day e-HRM experiences, activities, feelings, and interpretations of the users, in order to collect information related to some of the constructs mentioned in the previous section (Cassell & Symon 2004, p. 154).
• Group interviews: The purpose of group interviews is to collect feedback and interpretations from multiple interviewees (with similar experiences) in one gathering. An advantage of group interviews is hearing different opinions about the phenomenon at the same time (Cassell & Symon 2004, p. 143). This gives the researcher an opportunity to observe a sufficient range of feelings about and experiences with the e-HRM system at once.
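The sketch below restates the Table 4 mapping as a lookup: the role labels and appendix question IDs come from Table 4, while the dictionary and function are hypothetical conveniences for assembling an interview guide.

```python
# Table 4 as a routing map from interviewee role to appendix question IDs.
QUESTIONS_BY_ROLE = {
    "HR manager/administrator": ["A1", "A2", "B2-B5", "C1-C5", "D1-D4", "G1-G2", "G4-G5"],
    "end users": ["B1", "B7", "C2-C3", "F3-F4", "G1-G2", "G4", "H1-H2"],
    "e-HRM project manager": ["A3-A4", "B2-B9", "C1", "C4-C5", "D1-D4", "G1", "G3-G5"],
    "IT personnel": ["A3-A4", "B1-B2", "B8", "C2-C3", "D1-D4", "E1-E5", "F1-F5"],
}

def interview_guide(role: str) -> list:
    """Return the question IDs for a role (empty list if the role is unknown)."""
    return QUESTIONS_BY_ROLE.get(role, [])

print(interview_guide("IT personnel"))
```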
Sampling Methods

The criteria for selecting the case study samples were based mainly on the type of e-HRM system used by an organization: off-the-shelf or in-house developed. The author also wanted to conduct intensive, comparative, and descriptive investigations of both e-HRM systems and therefore selected only two case studies to concentrate on. Second, the need to collect a large amount of data to analyze the systems constrained the author to select samples with substantial experience of electronic HR; this narrowed the search to organizations with a minimum of two years of e-HRM experience. Third, the two sample organizations operate in different markets and have different cultures, so as to generate more interesting and varied results. For example, the first sample organization is a mid-size, privately owned company with a profit-oriented market culture, while the second is a government-owned entity with a culture oriented more toward bureaucracy. Fourth, the size of the sample organizations was not a major criterion; however, size does give an organization more of an incentive to implement an e-HRM system. For example, middle- to large-size organizations feel more pressure to implement an HR system that keeps human resources management under control. Lastly, technologically advanced organizations were preferred by the author, as they are more likely to adopt e-HRM technologies at an early stage of their life cycle.
Table 4. Interviewees and related questions (question IDs refer to the appendices)
• HR manager / administrator: A1, A2, B2-B5, C1-C5, D1-D4, G1-G2, G4-G5
• End users (employees & managers): B1, B7, C2-C3, F3-F4, G1-G2, G4, H1-H2
• Project manager of the e-HRM project: A3-A4, B2-B9, C1, C4-C5, D1-D4, G1, G3-G5
• IT personnel: A3-A4, B1-B2, B8, C2-C3, D1-D4, E1-E5, F1-F5
Data Analysis Methods

Intensive interviews were first conducted at both sites and recorded on a digital recorder, then transcribed into separate sets of data for each case. The data was then categorized into multiple sections according to the operationalization of constructs in Table 3.
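One simple way to picture that categorization step is keyword-based coding of transcript paragraphs against the framework constructs. The sketch below is purely illustrative: the construct names echo the framework, but the keyword lists and the function are invented.

```python
# Hypothetical keyword lists for filing transcript paragraphs under constructs.
CONSTRUCT_KEYWORDS = {
    "implementation approaches": ["training", "reengineering", "sponsor", "plan"],
    "e-HRM goals": ["efficiency", "self service", "cost", "strategic"],
    "e-HRM outcomes": ["commitment", "competence", "congruence", "effectiveness"],
}

def code_paragraph(paragraph: str) -> list:
    """Return every construct whose keywords appear in the paragraph."""
    text = paragraph.lower()
    return [c for c, kws in CONSTRUCT_KEYWORDS.items()
            if any(kw in text for kw in kws)]

print(code_paragraph("Top management appointed a sponsor and funded training."))
```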
FINDINGS

The data analysis started with transcribing the recorded interviews to capture as much necessary data as possible. The transcripts were then studied, and paragraphs rephrased and rearranged under the corresponding constructs depicted in the framework. The following sections contain the analyzed transcripts from both case studies, where Case 1 represents the company that uses the off-the-shelf Oracle ERP and Case 2 represents the governmental institute that developed its own e-HRM system in house. Data collected from each case is displayed side by side according to the theoretical constructs to show a clear comparison between the two systems.
e-HRM Overview

The aim of this section is to explain some of the general issues associated with the implementation of the e-HRM systems in both cases. The overview allows the researcher to get a feel for the researched systems before moving to the detailed interviews that confirm or refute the propositions suggested in the theoretical framework.

Case 1: The implementation of the Oracle ERP system started in 2004. Before that, in 2001, an initiative from the Board to perform a total
restructuring and business process reengineering (BPR) took place. The aim of this BPR was to bring all business processes into the IT era, including the transformation of HR practices to e-HR technologies. The main steps of the reengineering were diagnostics, analysis, and implementation. HR reengineering (i.e., the transformation to e-HR using Oracle ERP) made up 30% of the organization's whole BPR effort. It took about three years to progress through the diagnostics, analysis, and implementation stages, and about one year to complete the transformation of the HR function. To implement such a big change within a large organization, the bureaucratic and centralized culture needed to change as well; it was reshaped into an open environment that accepted change and forward movement. As for why this organization opted for Oracle ERP, the Chairman recommended this system as the best offer because of its flexibility, changeability, adaptability, and reputation relative to other vendors such as SAP and Anderson. The second reason was that the company's IT department did not have the qualified resources to implement such a fully integrated enterprise system supporting the new structure and culture; in fact, IT was mostly outsourced to a third party that was able to arrange and manage the e-commerce transformation. Oracle ERP needed many customizations to remove unwanted processes, such as taxation, and to add or modify HR processes to fit the company's HR needs. About 80% of manual work was eliminated by the new system. The BPR, along with the ERP implementation, reduced manpower from 900 to approximately 200, as most processes were eliminated, reduced, or outsourced. The remaining 20% of manual work and paperwork cannot be changed due to the nature of work that must be done in a professional, manual way, such as memorandums and official letters.

Case 2: The e-HRM system was developed by the Systems Development team in this
ganization’s IT department. The reason behind developing this e-HRM system in house was that there was no system out in the market that could fulfill the complex HR requirements implied by the country’s laws and by this governmental institute. The IT department manager indicated that if they had chosen an off-the-shelf e-HRM system, customization would have cost them incredible amount of time and money. Second, this system was developed and implemented in order to fit it to the user’s needs and not to adjust the user to the system’s offerings. The system development started in 2002 and took about 2 years to complete. It is totally web based using Oracle database, Microsoft .NET framework, and object oriented programming methods. Long before that, in 1976, there was an initiative by top management to automate most of the HR processes in order to minimize time and associated costs. Paperwork was costing the organization lots of time and money to store the voluminous data in multiple cabinets. Also, data integrity and security was a top priority and major concern that would justify the need to transform all HR processes into the e-commerce era. At that time, a mainframe system called “Tandem” was used to build the infrastructure that would support the renovated business. The current system was therefore developed and implemented in accordance to the vision of top management which encouraged the IT department to integrate the latest and most flexible and reliable electronic technologies. This upgrade required the replacement of the old mainframe system with the more flexible and compact Microsoft system. The e-HRM system provides user with many facilities such as attendance, which is totally secured and tied up to a biometric system that scans users fingerprints for authentication. Also, the main HR functions provided by COTS systems have been extracted and implemented within this in-house developed system, with a touch of flexibility to fit those functions to the organizational needs.
Presentation of Results

While the data collected might specifically describe the contextual situation of the e-HRM systems used in these two cases, some general remarks about the differences and similarities between off-the-shelf and in-house developed e-HRM systems can be drawn. Referring back to the main question of this research, which compares off-the-shelf with in-house developed e-HRM with regard to implementation and development approaches, e-HRM goals, the e-HRM activities they facilitate, application types and characteristics, and e-HRM outcomes, one can now summarize the differences and similarities at hand. In general, organizations transform their HR function to e-HR in the hope of reducing the time spent processing HR activities. The automation of HR processes is therefore a necessary requirement and a major step in the transformation, as noted by Lengnick-Hall and Moritz (2003, p. 367). A minimum of 90% of HR's paperwork in both cases has been eliminated and/or replaced by automated functions. It is also true that companies evaluate the capabilities of their IT department and accordingly decide whether they can develop and support enterprise-quality software in house. Furthermore, Pearce's (2005) point about organizations' concern over whether off-the-shelf software can provide standard e-HRM functions is borne out by both cases. On one hand, the first case judged that Oracle ERP would provide just about the right e-HRM functions with minimal customization needed; on the other hand, the second case believed that no single COTS system could provide standard e-HRM functions that best fit its business needs. Also, confirming the point of Amons and Howard (2004), off-the-shelf e-HRM systems dramatically reduce the time spent on the design and implementation stages, as the system has already been professionally designed and only needs to be integrated and deployed. However, the integration
and deployment stages are critical, because off-the-shelf systems need to be customized to bolt properly onto the company's environment; it took Case 1 a whole year to completely integrate and deploy Oracle ERP. Conversely, as Amons and Howard (2004) suggest, the design and development stages are lengthy for in-house developed software, especially in the second case of this research, where the development team's experience was limited. Looking at the development and implementation approaches, a general theme emerges: when implementing either type of e-HRM system (COTS or in-house developed), organizations look to satisfy the needs of the HR department as the main user, and the needs of all other users such as employees and managers. However, some approaches are emphasized more than others depending on the type of e-HRM system. For example, when implementing in-house developed e-HRM systems, users are continuously involved, trained, and educated throughout the product life cycle to ensure that the software works in the best interest of its users and to build a high level of trust in the IT department. This new experience meant a new challenge for the IT department and a new chance to prove its competence. Also, change management and communication management had to be perfected to provide proper support for the development team and to make sure no conflict or confusion would erode the trust between key players. As for project management, the second case believes that organizations that develop their e-HRM software in house must appoint a project manager from within the development team, for two reasons. First, it is best to manage this first experience in house to avoid conflicts of interest and to retain more control over decision making. Second, no one knows how to orchestrate the development team and leverage its capabilities better than a person who works closely with it.
The factors that affect off-the-shelf e-HRM implementation the most are business process reengineering, planning, and project management. The reason is that HR process reengineering and project management are two essential elements handled professionally by the vendor within a well-organized plan for implementing off-the-shelf e-HRM software. In other words, vendors do not just deliver the software; they manage the whole project through planning, reengineering, training, and implementation. From a realist point of view, one can also conclude that implementing an off-the-shelf e-HRM system is a professional exercise that must be handled by those most skilled and knowledgeable about the product itself, with this knowledge transferred to the acquiring party in an organized and well-planned manner. A new predominant factor emerged during this research, affecting the implementation of the e-HRM systems in both cases: teamwork plays an essential role in implementing e-HRM systems, as it triggers a synergistic effect between the teams involved in the transformation project. Cross-functional teams worked cohesively in both cases to achieve planned targets and meet project gates. Flexibility, timeliness, and the elimination of manual processes drive efficiency in both e-HRM implementations; increasing efficiency is therefore the most critical goal of transforming the HR function to e-HR. Efficiency can also be measured by the reduction in HR staff after the implementation: in both cases, fewer staff members could handle more tasks using the e-HRM systems. Providing employee and manager self-services also came at the top of the list of the most effective e-HRM goals. ESS and MSS are implemented within both e-HRM systems to give employees easy access to multiple e-HRM services, performed swiftly and without interaction with the HR office. Third, achieving service excellence is considered crucial for organizations that develop e-HRM software in house. This research concludes that when
an e-HRM system is developed in house, users' expectations of results are at their peak, because the software is developed under the same organizational culture, by staff who share that culture. Last, and perhaps least considered in both cases, is the goal of improving strategic focus. It is evident that, due to the lack of market competition in Kuwait, companies spend less effort on improving their strategic orientation. Operational, relational, and transformational e-HRM activities are common to both systems. However, these e-HRM activity types become the key competitive advantage of in-house developed e-HRM systems. This research concludes that when an e-HRM system is developed to best fit users' needs, many of those needs become reality in an attempt to achieve maximum customer satisfaction and meet those high expectations. The outcome of this phenomenon is a system that harbors the best practices of off-the-shelf e-HRM systems with a competitive twist tailored to the finest details of users' needs. As for the e-HRM application characteristics, supporters of off-the-shelf systems (i.e., Case 1) believe that corporations such as Oracle and SAP build the market's best practices into their software solutions to gain a competitive advantage and increase market share. This leads to the creation of state-of-the-art enterprise solutions such as Oracle ERP. Also, Case 1 agrees with Silverman (2006) that the risk of implementing off-the-shelf e-HRM systems is minimized by the fact that those systems are professionally provided and supported by experienced entities. On the other hand, the software quality characteristics depicted by ISO 9126 might not have been seriously considered by the inexperienced development team of the second case. This might be because software development is not this organization's core business. Even so, features like accuracy, suitability, security, reliability, usability, and maintainability became integral during the design and development phases.
Comparing the e-HRM outcomes of the two cases, an apparent theme emerges. Because of the amount of trust, confidence, and support shown to the in-house development team during the design, development, and implementation stages, employees' commitment may substantially increase. As a senior HR staff member at case 2 phrased it: "supporting the hometown team eventually pays off." Another reason for this phenomenon is the degree of customer satisfaction built into the system, as most of the users' needs are catered for to create a system that best fits those needs.

Cost effectiveness, on the other hand, is most favored by those organizations that use an off-the-shelf system. Traylor (2006) agrees that although the amount of money spent on acquiring, integrating, and supporting an enterprise off-the-shelf system is very high, greater cost effectiveness is achieved for both implementation and ongoing maintenance. Case 1 felt that using the renowned Oracle ERP system and the support package offered by the vendor avoided software obsolescence in the long run, as professional software providers continually incorporate the latest technologies into their services. Another outcome observed by the first case, a change in employees' attitudes, remains questionable. The change in attitude might have been partially affected by the open environment created during the e-HRM transformation, but it is believed that the organizational-level business process reengineering had the bigger effect on people's attitudes.

Finally, usefulness and ease of use are natural effects of implementing both e-HRM systems. However, both benefits are most noticed by the users of in-house developed e-HRM systems because those systems are made to specifically fit the users' needs. Table 5 provides a summary of these findings and a clear comparison between the two systems with regard to the variables in the suggested framework.
Table 5. Off-the-shelf vs. in-house developed e-HRM systems

Variable | Off-the-shelf | In-house

Implementation & Development Approaches
User Involvement | Moderate | High; users continuously involved throughout all stages
Business Process Reengineering | Considered a major factor, since HR processes need to be reengineered to conform to off-the-shelf standards | Some processes reengineered to eliminate time-consuming manual work
Planning & Vision | More planned; strategy and vision are professionally outlaid | Informal plan and strategy
Training & Education | Standardized; delivered by vendor | More emphasized to gain proper organizational support
Change Management | Handled professionally with the help of the vendor; more rejection of the foreign system | Handled by project manager to increase effectiveness; less rejection and more support of the friendly system
Top Management Support | Moderate support; high trust in professional services | High level of support for the home-made product
Effective Communication | Formal communication through awareness sessions | Different informal communication channels for maximum effectiveness
Project Management | Outsourced to experienced entity for more effectiveness | In-house for more effectiveness
Teamwork | Highly emphasized | Highly emphasized

e-HRM Goals
Increase Efficiency | Time and cost reduction are guaranteed by vendor and their professional e-HRM | Efficiency accomplished by utilizing resources, minimizing time and errors, and eliminating manual work
Provide Self Service | ESS and MSS as provided by vendor | ESS and MSS shaped to better fit users' needs
Achieve Service Excellence | Customer-oriented culture imposed by the system and not necessarily by the organization | Crucial due to the high level of users' expectations
Improve Strategic Focus | Strategic solutions offered professionally by the system | Not critical

e-HRM Activities
Operational Activities | Standardized | Tailored to the last detail of users' needs
Relational Activities | Standardized | Tailored to the last detail of users' needs
Transformational Activities | Standardized | Tailored to the last detail of users' needs

e-HRM Application Characteristics
Functionality | Very secure, accurate, and interoperable; moderately suitable with a number of customizations | High suitability; very secure and accurate; limited interoperability
Reliability | High maturity, fault tolerance, and recoverability | Immature but recoverable; limited errors
Usability | Easy to understand, learn, and operate; normal attractiveness due to the standard graphical interface | High usability levels due to the implementation of most users' needs; very attractive GUI
Efficiency | High levels of timeliness and resource utilization | High efficiency
Maintainability | Professionally maintained by vendor through service and support agreements | Easy to analyze, install, and test; limited changeability
Portability | Easily adopted in multiple environments | Limited to Windows-based environments

e-HRM Outcomes
Commitment | Mildly imposed by the software's culture | Substantially high due to the amount of users' needs implemented in the system
Competence | Moderate increase | Moderate increase
Cost Effectiveness | Substantially high due to the professionalism of the system | Normal
Congruence | High | High

HRIS Benefits
Usefulness | Normal | Meets or exceeds expectations
Ease of Use | Normal | Meets or exceeds expectations
FUTURE RESEARCH
Further research can be done to determine the best practices for implementing e-HRM systems in the Kuwaiti context. Yeow and Sia (2007) found, from a social constructivist point of view, that "best practices" are socially enacted knowledge that requires organizational power, through politics and discourse, to influence different technological assumptions, expectations, and knowledge until the differences are resolved into what is logically called "best practices". This means that politics and discourse in the context of Kuwaiti organizations can be studied to determine what the best practices are for implementing e-HRM systems.

Another approach to future research could be pursued once e-HR transformation in Kuwait has reached maturity and the market is saturated with diverse organizations that use different e-HRM systems. Infusing e-HRM technologies into Kuwait's organizations is inevitable; however, it could be long before such technologies are reinforced by cultural values and/or organizational needs, because legislation for electronic technologies has yet to be incorporated into Kuwaiti law.
CONCLUSION

The comparison between off-the-shelf and in-house developed e-HRM applications has been explored in this research to identify the differences and similarities between development and implementation approaches, e-HRM goals, e-HRM activities, application types and characteristics, and the outcomes of implementing both systems. A realist philosophy was used to extract facts about the suggested theoretical framework introduced in chapter three. Further, two cases were used as the best candidates for this research because of the different corporate cultures and markets they operated in, which led to more interesting observations.

Looking back at the theoretical framework developed in chapter three, one can suggest a few minor adjustments. One of the elements that became irrelevant to this comparative study was the e-HRM application types. E-HRM systems are divided into four main types as described in the literature, namely administrative, employee/manager productivity, strategic HCM, and business intelligence applications. Each of these sets handles a number of e-HRM activities (i.e., operational, relational, and transformational activities). However, these application types do not provide relevant information and, therefore, cannot be
used as a credible variable for the theoretical assumptions made about the differences and similarities between off-the-shelf and in-house developed e-HRM systems. Another adjustment would be to add teamwork as one of the major factors affecting the implementation and development of e-HRM systems, as it figured prominently in both cases.

Generally, the choice between off-the-shelf and in-house developed e-HRM depends on factors such as the IT department's capabilities, the amount of time and resources available for the development and implementation phases, and the purpose of the e-HR transformation. We saw that when companies plan a total business process reengineering, for example, the best practice would be to seek the assistance of professional services that can provide enterprise-level quality.

One of the most interesting findings of this research is that when e-HRM systems are developed in house, users' expectations are high. Also, trust and confidence in the development team must be increased in order for the development to meet those high expectations. On the other hand, organizations that implement a certain off-the-shelf e-HRM system automatically develop a high level of trust in the professional expertise of the provider. This is because software providers implement the industry's best practices and software quality (i.e., ISO 9126) in their solutions to gain a competitive advantage and increase market share.

Teamwork and cross-functional team cooperation are at the top of the list of factors affecting the development and implementation of both systems, as they create a synergistic environment that leads to more efficiency and effectiveness. User involvement, effective communication, and change management are the most critical success factors for implementing in-house developed e-HRM software. On the other hand, planning and vision, project management, and business process reengineering are the factors that affect the implementation of off-the-shelf e-HRM systems
the most, because these systems are handled professionally by the vendors.

Increasing HR efficiency is a critical objective behind implementing both e-HRM systems. This goal is achieved mainly through increasing the flexibility and timeliness of HR processes and eliminating a large portion of the manual processes. Reducing HR manpower is a consequence of the e-HR transformation that can also be counted as an efficiency measure.

The fact that an e-HRM system is developed and supported internally by the IT department increases the level of pride in, and support of, the system and the team that develops it. This high level of support fosters congruence between employees and line managers, and improves the alignment between departmental and organizational goals. Furthermore, employees' commitment increases as the level of confidence in and support of the "home-made" e-HRM system increases. By contrast, companies that implement off-the-shelf e-HRM systems increase cost effectiveness over the life cycle of the product because those systems are "built to last".
REFERENCES

Al-Sehali, S. H. (2000). The factors that affect the implementation of enterprise resource planning (ERP) in the international Arab Gulf States and United States companies with special emphasis on SAP software. D.I.T. dissertation, University of Northern Iowa, Iowa.

Amons, P., & Howard, D. (2004). Buy it, build it, or have it built? Catalog Age, 21(3), 50.

Ball, K. S. (2001). The use of human resource information systems: A survey. Personnel Review, 30(6), 677–693. doi:10.1108/EUM0000000005979

Beer, M., Spector, B., Lawrence, P., Mills, Q., & Walton, R. (1984). Managing Human Assets. New York: The Free Press.

Bingi, P., Sharma, M. K., & Godla, J. (1999). Critical issues affecting an ERP implementation. Information Systems Management, 16(3), 7–14. doi:10.1201/1078/43197.16.3.19990601/31310.2

Buckhout, S., Frey, E., & Nemec, J., Jr. (1999). Making ERP succeed: Turning fear into promise. IEEE Engineering Management Review, 116–123.

Carr, N. G. (2003). IT doesn't matter. Harvard Business Review, 81(5), 41–49.

Cassell, C., & Symon, G. (2004). Essential Guide to Qualitative Methods in Organizational Research. London: SAGE Publications.

CedarCrestone 2006 HCM Survey: Workforce Technologies and Service Delivery Approaches – 9th Annual Edition.

CedarCrestone 2007-2008 HR Systems Survey: HR Technologies, Service Delivery Approaches, and Metrics – 10th Annual Edition.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1004. doi:10.1287/mnsc.35.8.982

Exclusive IOMA survey: What do users like (and dislike) about their HRIS? (2002, December). IOMA's Payroll Manager's Report, 02(12), 1.

Florkowski, G. W., & Olivas-Luján, M. R. (2006). The diffusion of human resource information technology innovations in US and non-US firms. Personnel Review, 35(6), 684–710. doi:10.1108/00483480610702737

Gueutal, H. G., & Stone, D. L. (2005). The Brave New World of eHR: Human Resources Management in the Digital Age. San Francisco: Jossey-Bass.

Hendrickson, A. R. (2003). Human resource information systems: Backbone technology of contemporary human resources. Journal of Labor Research, 24(3), 381–394. doi:10.1007/s12122-003-1002-5

HR technology trends to watch in 2007. (2007). HR Focus, 84(1), 1.

International Organization for Standardization (ISO). (2001). ISO/IEC 9126 Software Engineering – Product Quality – Part 1: Quality Model. Retrieved November 21, 2007, from http://www.iso.org

Kovach, K. A., Hughes, A. A., Fagan, P., & Maggitti, P. G. (2002). Administrative and strategic advantages of HRIS. Employment Relations Today, 29(2), 43–48. doi:10.1002/ert.10039

Lee, C., & Lee, H. (2001). Factors affecting enterprise resource planning systems implementation in a higher education institution. Issues in Information Systems, 2(1), 207–212. Retrieved November 24, 2007, from http://www.iacis.org

Lengnick-Hall, M. L., & Moritz, S. (2003). The impact of e-HR on the human resource management function. Journal of Labor Research, 24(3), 365–379. doi:10.1007/s12122-003-1001-6

Lepak, D. P., & Snell, S. A. (1998). Virtual HR: Strategic human resource management in the 21st century. Human Resource Management Review, 8(3), 215–234. doi:10.1016/S1053-4822(98)90003-1

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. The British Journal of Psychology, 91(1), 1–20. doi:10.1348/000712600161646

Nah, F. F., Lau, J. L., & Kuang, J. (2001). Critical factors for successful implementation of enterprise systems. Business Process Management Journal, 7(3), 285–296. doi:10.1108/14637150110392782

Ngai, E. W. T., & Wat, F. K. T. (2006). Human resource information systems: A review and empirical analysis. Personnel Review, 35(3), 297–314. doi:10.1108/00483480610656702

Pearce, J. (2005). In-house or out-source? Three key elements for IT development. Franchising World, 37(4), 93–95.

Remus, U. (2007). Critical success factors for implementing enterprise portals: A comparison with ERP implementations. Business Process Management Journal, 13(4), 538–552. doi:10.1108/14637150710763568

Ruël, H., Bondarouk, T., & Looise, J. K. (2004). E-HRM: Innovation or irritation. An explorative empirical study in five large companies on Web-based HRM. Management Review, 15(3), 364–380.

Ruta, C. D. (2005). The application of change management theory to the HR portal implementation in subsidiaries of multinational corporations. Human Resource Management, 44(1), 35–53. doi:10.1002/hrm.20039

Saunders, M., Lewis, P., & Thornhill, A. (2003). Research Methods for Business Students (3rd ed.). Essex: Pearson Education Ltd.

Schramm, J. (2006). HR technology competencies: New roles for HR professionals. HRMagazine, 51(4), special section, 1–10.

Silverman, R. (2006). Buying better to buy better. Contract Management, 46(11), 8–12.

Thaler-Carter, R. E. (1998). Do-it-yourself software. HRMagazine, (May), 22.

Traylor, P. (2006). To buy or to build? That is the question. InfoWorld, 28(7), 18–23.

Valverde, M., Ryan, G., & Soler, C. (2006). Distributing HRM responsibilities: A classification of organisations. Personnel Review, 35(6), 618–636. doi:10.1108/00483480610702692

Wright, P. M., & Dyer, L. (2000). People in the e-business: New challenges, new solutions (Working Paper 00-11). Center for Advanced Human Resource Studies, Cornell University.

Yeow, A., & Sia, S. K. (2007). Negotiating "best practices" in package software implementation. Information and Organization. doi:10.1016/j.infoandorg.2007.07.001

Yumiko, I. (2005). Selecting a software package: From procurement system to e-marketplace. The Business Review, Cambridge, 3(2), 341–347.

Zizakovic, L. (2004). Buy or build: Corporate software dilemma. Retrieved November 11, 2007, from http://www.insidus.com

KEY TERMS AND DEFINITIONS

BPR: Business Process Reengineering
BSS: Benefit-related Self Service
COTS: Commercial Off-The-Shelf
E-HRM: Electronic Human Resources Management
ERP: Enterprise Resource Planning
ESS: Employee Self Services
HCM: Human Capital Management
HRIS: Human Resources Information Systems
HRM: Human Resources Management
ISO: International Organization for Standardization
IVR: Integrated Voice Response
MSS: Manager Self Services
PSS: Pay-related Self Service
ROI: Return on Investment
SMR: Simple Management Reporting
TBS: Total Benefits Statement
TMSS: Time Management Self Service

This work was previously published in Handbook of Research on E-Transformation and Human Resources Management Technologies: Organizational Outcomes and Challenges, edited by Tanya Bondarouk, Huub Ruel, Karine Guiderdoni-Jourdain and Ewan Oiry, pp. 92-115, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 3.14
Enterprise Resource Planning Under Open Source Software

Ashley Davis
University of Georgia, USA

DOI: 10.4018/978-1-59904-531-3.ch004
ABSTRACT
Open source software is becoming more prevalent in businesses today, and while still relatively immature, open source enterprise resource planning (OS-ERP) systems are becoming more common. However, whether or not an OS-ERP package is the right software for a given organization is a little-researched question. Building on the current real options thinking about platform acquisitions, this chapter proposes the five most critical factors to consider when evaluating an OS-ERP package. To adequately do this, a great deal of detail about the current offerings in OS-ERP software is presented, followed by a review of the real options theory and thinking behind using these factors to evaluate OS-ERP options. The international implications of OS-ERP are presented in the "Future Trends" section.
INTRODUCTION

Open source software (OSS) is becoming a prominent part of the business infrastructure landscape. However, open source application software is still in its infancy. Success of open source enterprise resource planning (OS-ERP) systems will signify a coming of age of open source applications. There are many factors that will determine if OS-ERP systems are a valuable option for corporations, and thus whether OS-ERP systems will become as prominent as other open source offerings like Linux or JBOSS. This chapter will inform the reader of the current state of OS-ERP in the global context, and explain to potential adopters of OS-ERP the important factors to consider in evaluating an OS-ERP option. First, a common language for defining OS-ERP systems will be developed. Second, the current state of OS-ERP software will be explored. Third,
the business models of OS-ERP vendors will be described. Fourth, the advantages and disadvantages of customization of OS-ERP software will be explained. Fifth, the factors for valuing OS-ERP options using real options theory (Fichman, 2004) are defined. Finally, the global adoption of ERP software is explored.
BACKGROUND

The first requisite for understanding OS-ERP systems is to define a common language for talking about OS-ERP applications. This includes defining exactly what an OS-ERP application entails and whether the software meets the definition of open source software. There is much ambiguity in the popular press about what is and is not OSS; this is only confounded when ERP systems claim to be open source. To clarify these issues, the next section explains the historical context of OSS. Second, open source licensing issues are explained. Then, the issue of open source ERP functionality is addressed. Lastly, some examples of OS-ERP software are provided.
History of Open Source Software

Open source software has a rich history, from an initially chaotic beginning in a hacker culture (Raymond, 1999) to its current manifestation as a foundation for profit-seeking corporations such as JBOSS (Watson, Wynn, & Boudreau, 2005), Compiere Inc., and Red Hat Linux. As open source has evolved, the definition of open source software has changed and the open source ecosystem has grown. Previously, open source software was defined in terms of two characteristics: (1) licenses that give programmers the ability to view, change, and enhance open source code, and to distribute the source code without discrimination (Feller & Fitzgerald, 2000; Open Source Initiative, 2005); and (2) the software is free of cost. While this definition was sufficient for pure open source
initiatives of the past, it does not adequately cover all that "open source" includes today. This is in contrast to proprietary software, where the license generally does not allow for distribution of the source code and the software is not free of cost. Evolution and commercialization of OSS have led to many products being labeled "open source" that are not free of cost. As well, proprietary software (software controlled and offered by vendors for a price) that gives access to the code is termed open source even when there is no licensing to support the open source model of software development. Proprietary software that allows access to the source code still leaves control of the source code (and of what is included in future versions) in the hands of the vendor, who may be less accepting of code contributions than an open source community.

However, even under the most stringent of open source (OS) definitions, there have been many great open source successes. For example, MySQL is an open source database server that has grown phenomenally since its inception in 1995. MySQL AB is the company that supports the MySQL product. The product is free, and the source code is available to everyone under the GNU General Public License (GPL); licensing will be discussed in more detail in the next section. MySQL is currently backed by several venture capitalists and is without debt (MySQL, 2007). There were over 12 million downloads of MySQL in 2006, and 2,500 new customers started using MySQL to power Web sites, critical applications, packaged software, and telecommunications infrastructure. MySQL is just one example of the success of OS software in the infrastructure space; other examples in infrastructure offerings include JBOSS and Linux.

In terms of business applications, there are fewer major success stories. SugarCRM, however, is one of the most successful open source business applications. SugarCRM is a customer relationship management software package (packaged software is that which offers a "set of functionality" in a
complete state that does not require programming) that is available under a "custom" open source license.1 SugarCRM has over a million downloads and about 1,000 customers (SugarCRM, 2007). ERP systems (large packaged integrated business applications that include intensive functionality in marketing and sales, accounting and finance, production and materials management, and human resources2) are relatively less prolific than these major success stories in the open source space. However, as the rest of this chapter outlines, several open source ERP offerings are becoming available. It remains to be seen which of these applications will become widely used in businesses.
Open Source Licenses

Licensing plays a big role in definitions of OSS. As of 2006, the Open Source Initiative (OSI) recognizes nearly 60 different open source licenses (Open Source Initiative, 2006). Given this number of licenses, and given that OSI is not the only organization that provides accreditation of licenses,3 it is understandable that the definition of open source is not stable. Many companies have turned to a dual-licensing strategy, making their code available under the general public license (GPL) but also offering a commercial license (Rist, 2005). Given this large number of licensing options and business models, it is understandable that there is confusion as to what software really is open source software.

Open source software still must allow access to the source code, as mentioned above, so OS-ERP systems are defined to allow access to the code. However, under what terms this access is available, including at what cost, is an open question. For the purposes of understanding OS-ERP software, one must assume that the software is not necessarily free of cost. The OS-ERP packages in this chapter use three general license types:
1. Apache License Version 2.0: "Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form." As with all licenses, it is recommended that a full review of the license is conducted before working with any open source software (Apache, 2007).

2. Mozilla Public License 1.1: This license allows for access to the source code for review and modification. This is a copyleft license, meaning that all modifications involving the original source code are sent back to the originator of the software. As with all licenses, it is recommended that a full review of the license is conducted before working with any open source software (Mozilla, 2007).

3. GNU GPL: "GNU is a recursive acronym for 'GNU's Not Unix'; it is pronounced guh-noo, approximately like canoe…the GNU General Public License is intended to guarantee your freedom to share and change free software—to make sure the software is free for all its users." Users of software under the GNU GPL can use, modify, improve, and redistribute such software (a sample per-file notice appears after this list). As with all licenses, it is recommended that a full review of the license is conducted before working with any open source software (GNU, 2007a, 2007b).
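To make the GPL's terms concrete, the following is a minimal sketch of the per-file notice that the Free Software Foundation's "how to apply these terms" guidance suggests placing at the top of each source file of a GPL-licensed program; the file name, year, and author here are placeholders, not taken from any package discussed in this chapter:

    # example_module.py -- part of a hypothetical GPL-licensed ERP package
    #
    # Copyright (C) 2007 Jane Developer
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.

Because this grant travels with every copy of the file, any recipient of the code receives the same rights to use, modify, and redistribute it, which is exactly the property that the custom licenses described next restrict or remove.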
However, some software companies create their own licenses, with their own implications for what "open source" means for their software. In the OS-ERP space, examples include:

1. OpenMFG: "…the license OpenMFG uses allows companies to view and modify source code, as well as make contributions to the source code" (Caton, 2006). However, this license does not allow for distribution of the source code. So, by definition, this is not what is generally meant by "open source" software. As with all licenses, it is recommended that a full review of the license is conducted before working with any software.

2. avERP: This license includes no license fees, no charge for updates, no fees charged for the adopter's own programming, no obligation to pay or purchase anything at any time, and no obligation to use services; it is possible to alter all program modules if wished, and the source code is included (HK-Software, 2007). It does not appear that redistribution of the software is allowed. As with all licenses, it is recommended that a full review of the license is conducted before working with any software.

3. OpenPro: This is software built on open source technologies that allows access to the source code. However, there is a license fee involved, and the right to distribute the software is not explained on the Web site (OpenPro, 2007). As with all licenses, it is recommended that a full review of the license is conducted before working with any software.

Table 1 outlines current OS-ERP offerings, licensing, and Web sites where more can be learned about the software. The popular press has referred to all of the software listed in Table 1 as OS-ERP, even those with cost. ERP systems are the backbone of an organization, and the cost of the initial license is only part of the total cost of ownership (TCO) of an ERP system or any software package. Organizations cannot rely on vendors and analysts to provide accurate TCO estimates. One practitioner notes, "…no matter how honest they try to be, neither vendor nor analyst can ever fill in all the variables of the TCO formula…The unique combination of resources, both machine and human, at work within your organization is something only you can fully understand" (McAllister, 2005). One estimate of the costs that make up TCO includes hardware and software, technical services, planning and process management, finance and administration, training, user support, peer support, and application development (DiMaio, 2004).
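Because TCO spans so many categories, even a rough model that forces every category into the open can discipline a comparison that would otherwise fixate on license fees. The sketch below sums DiMaio-style cost categories over a five-year horizon for two hypothetical candidates; all figures are illustrative assumptions, not data about any package in Table 1:

    # A minimal TCO sketch; every figure is an illustrative assumption,
    # not vendor data. Categories follow the DiMaio (2004) breakdown above.

    HORIZON_YEARS = 5

    def total_cost_of_ownership(one_time, annual, years=HORIZON_YEARS):
        """Sum one-time costs plus recurring costs over the planning horizon."""
        return sum(one_time.values()) + years * sum(annual.values())

    # Hypothetical off-the-shelf candidate: high license fee, less internal effort.
    proprietary = total_cost_of_ownership(
        one_time={"licenses": 500_000, "planning_and_process": 120_000, "training": 80_000},
        annual={"technical_services": 90_000, "user_support": 60_000, "administration": 30_000},
    )

    # Hypothetical OS-ERP candidate: no license fee, more internal development and support.
    open_source = total_cost_of_ownership(
        one_time={"licenses": 0, "planning_and_process": 150_000, "training": 110_000},
        annual={"application_development": 130_000, "user_support": 80_000, "administration": 30_000},
    )

    print(f"Off-the-shelf, 5-year TCO: ${proprietary:,}")   # $1,600,000
    print(f"OS-ERP, 5-year TCO:        ${open_source:,}")   # $1,460,000

As the McAllister quote above warns, only the adopting organization can fill in these variables, so such a model is an input to judgment rather than a verdict.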
Table 1. Open source ERP licensing

Open Source Offering | License | Web Site
Compiere | GNU GPL | http://www.compiere.org/
OpenMFG | Custom | http://www.openmfg.com/
OfBiz | Apache License Version 2.0 | http://ofbiz.apache.org/
Tiny ERP | GNU GPL | http://sourceforge.net/projects/tinyerp
OpenPro | Undisclosed Cost Structure | http://www.openpro.com/
WebERP | GNU | http://www.weberp.org/
ERP5 | GPL | http://www.erp5.org/
Adempiere | GNU GPL | http://www.adempiere.com/
avERP | Custom | http://www.hk-software.net/h-k.de/content/doc_138099-3-5-0.php
Fisterra | GNU GPL | http://community.igalia.com/twiki/bin/view/Fisterra
OpenBravo | Mozilla Public License 1.1 | http://www.openbravo.com/
GNUe | GNU GPL | http://www.gnuenterprise.org/
Value ERP | GNU | http://www.valueerp.com/
However, migration costs, testing, system integration, and so forth must be included in determining the total cost of switching to any software package. For a breakdown of some of the costs to consider, please see the cost comparison provided by Nolan (2005). The comparison of costs between proprietary ERP and OS-ERP systems is not readily known, as "the lack of large-scale implementations (of OS-ERP) eliminate a direct comparison with enterprise solutions, such as mySAP Business Suite or Oracle E-Business suite" (Prentice, 2006).

However, even getting an accurate TCO is not adequate when deciding whether to use open source software (DiMaio, 2004, 2007). For OS-ERP software, the decision is more complex because it crosses the bounds of the entire organization. Decision makers involved in evaluating ERP systems are aware that even for proprietary systems, cost estimates vary wildly (Cliff, 2003), and thus when considering OS-ERP solutions, TCO (as complex as it is) will be but one evaluative criterion. ERP evaluative criteria are often handled through complex request-for-proposal processes involving outside experts. Beyond this expertise, this chapter proposes factors for valuing OS-ERP options (see the "Real Options Value of OS-ERP" section) that should be considered in addition to these complex TCO estimates. As well, the business model behind open source offerings (see the "Open Source Vendor Business Models" section) will explain why cost is not as much of an issue in the definition of OS-ERP systems.

However, a pure definition of OS-ERP would not include avERP, OpenPro, or OpenMFG because, although all allow access to modify and change the code, these packages either charge a license fee or do not allow for redistribution of the software (as mentioned above).
Functionality of OS-ERP

For those interested in understanding the OS-ERP phenomenon, a solid foundation of understanding
the functionality of each ERP application is also important. The maturity of the proprietary ERP market has allowed convergence as to what constitutes ERP functionality. For most, an acceptable general definition of ERP systems is as follows: ERP systems are complex, modular software integrated across the company, with capabilities for most functions in the organization. As mentioned in the "History of Open Source" section, ERP systems include functionality in the following areas: marketing and sales, accounting and finance, production and materials management, and human resources. Many times, supply chain management and customer relationship management are also included in ERP packages.

It is beyond the scope of this chapter to list and define whether each open source system that claims to be ERP actually contains ERP functionality. For practitioners looking to acquire OS-ERP software and academicians looking to do OS-ERP research, a thorough assessment of whether a particular system contains full ERP functionality is required. This is more of an issue in OS-ERP systems, since most of these systems are fairly new and a consensus has not been achieved as to what actually constitutes an OS-ERP system. In the OS-ERP market, many of the claims about what functionality constitutes an OS-ERP system are blurry:

One characteristic of open source is that different projects define their category's feature sets in different ways. This is especially true of ERP packages. Linux-Kontor, for example, defines ERP without accounting, focusing instead on customer management, order entry, invoicing, and inventory. TUTOS, on the other hand, calls itself ERP but more closely resembles a groupware suite. Clearly, some research is needed to make sure you're really getting what you expect in this category. (Rist, 2005)

Another concern with open source software is that the offerings are changing rapidly, so for up-to-date and complete information, the Web sites listed in Table 1 are the best sources for information about the offerings and the platforms on which the software is built and which it supports. However, just to provide a flavor of the offerings, Table 2 provides a cursory look at the functionality being offered by these software products (OS-ERP) at the time of this work. Again, since functionality is hard to pin down (for example, if a package only lists "HR" as its offering, does this mean benefits administration is included?), those interested in this software should contact the company directly for the most accurate and up-to-date information.

Table 2. OS-ERP platform and functionality

Software | Platform | Functionality
Compiere | Independent | Quote to Cash, Requisitions to Pay, Customer Relations, Partner Relations, Supply Chain, Performance Analysis, Web Store/Self Service
OpenMFG | Linux, Apple Mac OS X, Windows | Manufacturing, Materials Management, Supply Chain (Sales Order, Purchase Order, CRM), Accounting
OfBiz | Linux, Berkeley Software Distribution | Supply Chain Management, E-Commerce, Manufacturing Resource Planning, Customer Relationship Management, Warehouse Management, Accounting
Tiny ERP | Independent | Finance and Accounting, CRM, Production, Project Management, Purchasing, Sales Management, Human Resources
OpenPro | Linux, Windows | Financials, Supply Chain, Retail and Manufacturing, CRM and E-Commerce, Warehousing, EDI
WebERP | Independent | Order Entry, Accounts Receivable, Inventory, Purchasing, Accounts Payable, Bank, General Ledger
ERP5 | Linux, Windows (coming soon) | Customer Relationship Management, Production Management, Supply Chain Management, Warehouse Management, Accounting, Human Resources, E-Commerce
Adempiere | Independent or Linux on its site | Point of Sale, Supply Chain Management, Customer Relationship Management, Financial Performance Analysis, Integrated Web Store
avERP | Linux | Sales, Manufacturing, Purchasing, Human Resource Management, Inventory Control, CAD Management, Master Data Management, Business Analyses
Fisterra | GNU/Linux | Point of Sale, Other Business Processes Specific to Automotive Glass Repair Businesses
OpenBravo | Linux, Windows | Procurement, Warehouse, Project Management, Manufacturing, Sales and Financial Processes, Customer Relationship Management, Business Intelligence
GNUe | Linux, Microsoft Windows | Human Resources, Accounting, Customer Relationship Management, Project Management, Supply Chain, E-Commerce
Value ERP (Project Dot ERP) http://www.valueerp.net/catalog/index.php | Linux, Solaris, Berkeley Software Distribution, Microsoft Windows | General Ledger, Payable and Receivable, Invoicing, Purchase and Receiving, Time Sheet Management for HR, Inventory Management and Manufacturing
OS-ERP Examples

There are many different flavors of OS-ERP packages, mostly because each package grew from a particular need (as OSS offerings often do). For example, Fisterra grew from a custom application built for an automotive glass replacement and repair company (Fisterra, 2007). Versions of the Fisterra product specifically intended for this industry are now called Fisterra Garage. This new name distinguishes them from Fisterra 2, a generic ERP released in 2004. In this case, proprietary ERP definitions apply, in that the package has what most would define as full ERP functionality. However, this is not always the case (as noted above).
The industry origins of the ERP package are important, as many times these are industry solutions that work best for the original industry. This is akin to SAP providing excellent manufacturing functionality because of its origins in the manufacturing industry. Another example of OS-ERP starting in a specific industry is OfBiz, which started with an emphasis on the retail industry (Adamson, 2004).
ASSESS THE OS-ERP LANDSCAPE

In the OSS arena, infrastructure products like Linux, the Perl scripting language, and the MySQL database management system have been very successful and thus very prominent. Given that the developers of open source software were many times also its users, this success is predictable. Less predictable is the success of open source software when used to develop applications. Most of the time, developers are developing the application for users who are very different from themselves. The users of applications are not technical, and their requirements are very different from those of technical users; hence the criticism that open source software is not "user friendly" or has a usability problem (Nichols & Twidale, 2003). This issue is important to OS-ERP, as possibly the entire organization will interface with the OS-ERP application, rather than only IT people directly working with infrastructure products.

OS-ERP is a quickly changing environment. For example, Compiere, one of the most well-known open source ERP companies, announced Andre Boisvert as chairman of the board and chief business development officer in May 2006. Jorg Janke (founder of Compiere) noted that Boisvert's success in "applying the open source business model to markets traditionally monopolized by proprietary software vendors is definitely a plus" (Compiere, 2006a). Then, in July 2006, Compiere added Larry Augustin to its board of directors. Mr.
Augustin is the founder of VA Linux (VA Software) and launched SourceForge.net, the largest open source development site on the Internet (Compiere, 2006b). These new additions to the board indicate an interest in growth by Compiere. However, Compiere faces many challenges; for example, a group of developers decided to "fork"4 and created Adempiere ERP, CRM, & SCM Bazaar (commonly referred to as Adempiere). Forking (Raymond, 1999) is not unusual in the open source community. OFBiz also has a forked version called Sequoia, which in February 2006 was renamed Opentaps, meaning Open Source Enterprise Application Suite.

The issue of forking is not new in open source software and thus is an issue in open source ERP applications as well. As mentioned earlier, forking usually occurs when a group of developers decides that the current direction of the project is not as they would like; take, for example, what happened with Compiere's fork, Adempiere. Adempiere was started in September 2006 after "a long running disagreement between Compiere and the community that formed around the project" (Adempiere, 2006). The developers behind Adempiere felt that Compiere was focusing on the "commercial/lock-in" aspects of the project, and decided to create a new version that could focus more on the community and the "sharing/enriching" aspects of the project. Jorg Janke refutes these allegations; nonetheless, forking has occurred.
Geographic Origins, Age, and Networks of OS-ERP Projects

At this time, there is very little research (academic or practitioner) into the global status of open source ERP. It is clear that there are many vendors out there offering open source packages in multiple languages, indicating an international audience. Also, the packages analyzed for this research have geographically dispersed locations, if any location is listed at all. Many times in an open source ERP project, the Web serves as the primary location for the project, and there is no real geographic orientation associated with the project, as is indicated by "Web" in Table 3.

Most of the open source ERP companies are relatively new (see Table 3), though many have formidable numbers of partners (see Table 4). Partners are important in sustaining an open source project. Partners have some monetary interest in sustaining a viable open source offering, as they often offer consulting services, implementation services, maintenance, or training.

The number of customers is also important in assessing the maturity of an OS-ERP. As can be seen from Table 4, Compiere currently has more than three times the number of customers of any other OS-ERP application. However, it is hard to assess the number of customers, because many of these open source projects have no central repository for keeping track of this information. Although partners that provide services for OS-ERP systems have some count of how many customers they currently have, they do not necessarily share this information. For assessing the value of OS-ERP software, a large number of partners, customers, and developers provides a strong network and thus should provide greater value. More will be discussed on this topic in the "Real Options Value of OS-ERP" section.

Another important consideration in assessing the landscape of OS-ERP offerings is that most of these packages are targeted to specific audiences and may only be proven for specific sizes of organizations. Table 5 outlines the size of organization targeted by OS-ERP packages according to their Web sites. From Table 5, it can be gleaned that most OS-ERP projects are targeting small and medium enterprises, with some targeting large organizations. With this target audience, the question of multinational implementation success with OS-ERP systems is raised. At this time, the limited and varying functionality (see Table 2) and lack of large-scale implementations preclude direct comparison with solutions like mySAP Business Suite or Oracle E-Business Suite (Prentice, 2006).

Table 3. Geographic origins of OS-ERP projects

Open Source Company | Project Founded | Where
Compiere Inc. | 2001 | Santa Clara, CA
OpenMFG | 2001 | Norfolk, VA
OfBiz | 2001 (2003 migrated to java.net) | Web
Tiny ERP | 2002 | Belgium
OpenPro | 1998 (1999 first release) | Fountain Valley, CA
WebERP | 2001 (2003 first release) | Web
ERP5 | 2002 | Web
Adempiere | 2006 | Web
avERP | 1998 (2001 first installation) | Bayreuth, Germany
Fisterra | 2002 (2003 first release) | Web
Openbravo | 2001 | Pamplona, Spain
GNUe | 2003 (first release) | Web
Value ERP | 2005 | Sparta, NJ

Table 4. OS-ERP network (partners, customers, and developers)

Project | Customers | Partners | Developers | Data Collected From
OpenBravo | 15 | * | * | Serrano & Sarriegi, 2006
Compiere Inc. | 240 | 70 | 50 | Ferguson, 2006
ERP5 | 10 | 8 | * | Serrano & Sarriegi, 2006; www.erp5.org
avERP | 60 | * | * | http://de.wikipedia.org/wiki/AvERP
OpenMfg | 35 | 20 | 100 | Ferguson, 2006; www.openmfg.com
OFBiz | 59 | 21 | * | www.ofbiz.org
TinyERP | * | 34 | * | http://tinyerp.com
OpenPro | * | Over 75 | * | www.openpro.com
WebERP | * | 9 | * | www.weberp.org
Table 5. Size of organization targeted by OS-ERP packages

OS-ERP Software | Size of Organization Targeted
Compiere | Small, Medium
OpenMFG | Small, Medium Manufacturers
OfBiz | *
Tiny ERP | *
OpenPro | Small, Medium, Large (1-1,000+ users)
WebERP | *
ERP5 | Small, Medium, Large
Adempiere | Small, Medium
avERP | Small, Medium (1-300 employees)
Fisterra | Small, Medium
OpenBravo | Small, Medium
GNUe | Small, Medium, Large
Value ERP | Small, Medium, Large

Venture Capital Funding and Project Activity in OSS and OS-ERP

The open source market is heavily funded by venture capital firms; Guth and Clark (2005), in the Wall Street Journal, estimated that $290 million was invested in open source in 2004. However, open source ERP companies are lagging in maturity, and thus their numbers for VC funding are much lower. Currently, two open source ERP companies are funded by venture capitalists: OpenBravo (Serrano & Sarriegi, 2006) and Compiere (Hoover, 2006). Both companies are funded with $6 million.

Given the importance of the community to open source project value, SourceForge.net tracks the most active projects daily. As of November 5, 2006, three OS-ERP projects fall in the top 15 most active: Openbravo ERP at #7; Adempiere ERP, CRM, & SCM Bazaar at #11; and Compiere at #14. There is clearly activity happening at the site that hosts these projects (http://sourceforge.net/top/mostactive.php?type=week). This is a sign of strength for these OS-ERP projects. Also, in trying to assess the value of OS-ERP options, VC funding and project activity would certainly positively impact the prospect for network dominance of the OS-ERP. More will be discussed on this topic in the "Real Options Value of OS-ERP" section of this chapter.
OPEN SOURCE VENDOR BUSINESS MODELS

Open source vendors have many different business models. When working with a piece of open source software, it is important to understand which type of organization/business model is in use by any particular vendor. Table 6 defines the business models that have been used to describe what is happening with open source firms in general. The simplest understanding of these models can be drawn from Bonaccorsi, Giannangeli, and Rossi (2006), with Pure OS and Hybrid business models. The Pure OS model includes firms that only offer OS products or OS solutions. The Hybrid business model includes all the others that play and profit in the OS space. Hybrid business models mix products, types of licenses, and sources of revenues.

For academicians and practitioners alike, it is important to understand what type of vendor is involved in any OS project undertaken. As can be seen from the plethora of business model types, the objectives and goals of each type of business will differ and may impact the quality of service or the types of products that the vendor offers. Organizations should consider the business model of any OS vendor with whom they engage in business, as the model and the viability of the vendor will impact the value of the OS-ERP option. Proprietary vendors do not have this range of business models, so this is not an issue with proprietary ERP systems. OS-ERP option value and its relation to business model will be discussed more in the "Real Options Value of OS-ERP" section of this chapter.

Table 6. Open source business models

Business Model | Definition | Source
FOSS | Value-added service-enabling: sell support services and complementary software products. Loss-leader/market creating: the goal is enlarging the market for alternative closed source products and services | Fitzgerald, 2006
OSS 2.0 | Includes FOSS and adds the following. Dual product/licensing: can download for free; a small percentage of downloads purchase a commercial license. Cost reduction: proprietary companies offer OS as part of their solution. Leveraging the open source brand: as government agencies mandate that open source be a priority option for software solutions, the open source brand becomes more valuable | Fitzgerald, 2006
Software Producer GPL Model | Entirely open source offerings | Krishnamurthy, 2005
Software Producer Non-GPL Model | Incorporate source code into a larger code base and create a new product, or take an entire open source product and bundle it with existing products | Krishnamurthy, 2005
The Distributor | Derives benefits by providing it on CD, providing support services to enterprise customers, and providing upgrade services for the open source product | Krishnamurthy, 2005
Third-Party Software Provider | Provide services for all types of products | Krishnamurthy, 2005
Pure OS Business Model | Firms that offer only OS products and OS solutions | Bonaccorsi et al., 2006
Hybrid Business Model | "They (hybrids) distribute OS products but also develop customized solutions using OS software, for which they presumably offer installation, support, and maintenance. The large majority also actively supply complementary services such as consulting, training, and to a lesser extent research and development." (p. 1090) | Bonaccorsi et al., 2006
Professional Open Source (POS) | "POS combines the benefits of open source (OS) with the development of methodologies, support, and accountability expected from enterprise software vendors." (p. 329) Three features of POS: (1) separation of product adoption and purchase, (2) seed and harvest marketing strategy, and (3) dual growth of firm and ecosystem | Watson et al., 2005
Proprietary | Inability to view and modify the source code (regardless of price) | Watson, Boudreau, York, Greiner, & Wynn, 2006
Open Community | Volunteers develop and support code with limited or no commercial interest | Watson et al., 2006
Corporate Distribution | Organizations create value by providing complementary services such as: interacting with the community for support, supporting the software for customers, identifying appropriate OSS for customers | Watson et al., 2006
Sponsored OSS | Corporations act as primary sponsors of OSS projects. These corporations provide funding and/or developers to the project. | Watson et al., 2006
Second-Generation OSS (OSSg2) | A combination of corporate distribution and sponsored OSS (the OSSg2 company provides complementary services around the products, but OSSg2 companies also provide the majority of the resources needed to create and maintain their products). These types of corporations strive to provide accountability, talented programmers, and a healthy ecosystem. | Watson et al., 2006
CUSTOMIZATION OF OS-ERP

Proprietary ERP system customization has been explored extensively in the ERP literature (Gattiker & Goodhue, 2004; Levin, Mateyaschuk, & Stein, 1998; Nah & Zuckweiler, 2003; Soh & Sia, 2004, 2005; Soh, Siew, & Tay-Yap, 2000). OS-ERP applications are different from proprietary ERP applications in that they are closer to custom applications than to packaged software. For example, packaged software has limitations in terms of customization because the source code is not available. The issues with packaged software described by Gross and Ginzberg (1984), namely "uncertainty about package modification time and cost, vender viability, and the ability of the package to meet the user needs," will apply differently to open source applications. Open source software has the code available, so the user is free to change (and many times is encouraged to change) the code as needed. Therefore, uncertainty about package modification time and cost depends solely on the skill of the programmers, not on some constraint that the modification may not be allowed by the proprietary software vendor.

However, the ability to change source code raises a new set of problems. The efficiencies that we reap from packaged software (standards, easy maintenance, and upgrades) will not necessarily be available with open source software. Thus, customization of OS-ERP is a double-edged sword. Although it is beneficial to adopters from a software-flexibility perspective, maintenance, upgrades, and version control become new issues with which the adopting organization must contend. Another issue that comes with OS-ERP is that the quality of the code will not always match the quality of code required by the implementing organization (Spinellis, 2006). With proprietary code this is not as much of an issue, as you do not usually have access to all that code, and maintenance is definitely the duty of the vendor. In the OSS industry, it is many times expected that the skill for maintaining the code resides in
the adopter’s organization. Code of questionable quality will be harder to modify and maintain.
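To put rough numbers on this double-edged sword, the following sketch models the recurring cost of carrying local customizations across upstream releases; the conflict probability and rework cost are illustrative assumptions, not measurements from any OS-ERP project:

    # A minimal sketch of the carrying cost of customization, assuming each
    # local change conflicts with a given upstream release with probability
    # p_conflict and then costs rework_cost (in dollars) to re-apply.

    def expected_upgrade_cost(n_customizations, releases_per_year, years,
                              p_conflict=0.25, rework_cost=2_000):
        """Expected rework cost of keeping local changes alive across upgrades."""
        upgrades = releases_per_year * years
        expected_conflicts = n_customizations * upgrades * p_conflict
        return expected_conflicts * rework_cost

    # Lightly vs. heavily customized adoptions of the same hypothetical package.
    light = expected_upgrade_cost(n_customizations=10, releases_per_year=2, years=5)
    heavy = expected_upgrade_cost(n_customizations=80, releases_per_year=2, years=5)

    print(f"Light customization, 5 years: ${light:,.0f}")   # $50,000
    print(f"Heavy customization, 5 years: ${heavy:,.0f}")   # $400,000

Under these assumptions, every change the adopter makes becomes a recurring liability at each upgrade, which is why code quality and in-house maintenance skill weigh so heavily in the adoption decision.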
REAL OPTIONS VALUE OF OS-ERP

Much research has been done on real options theory as a way of evaluating platform change decisions (Fichman, 2004; Fichman, Keil, & Tiwana, 2005; McGrath, 1997; McGrath & MacMillan, 2000). In fact, a switch from SAP R/2 to SAP R/3 was considered a platform decision and evaluated in terms of real options (Taudes, 2000). These models and methods are useful, but seem to be missing some key variables that would influence the option value of OS-ERP.

Real options theory rests on the belief that limited commitment can create future decision rights. The theory has a rich history born in the finance and economics literature, and has been applied to technology in a variety of ways. In terms of technology positioning projects, real options theory proposes that technologies are desirable if they provide opportunities for future rent creation. These investments require less commitment than if a full plan were created that did not allow for quitting midstream. Beyond the specific real options literature that supports the logic of committing to software with the intent of exercising some future option, the project management literature on management information systems (MIS) also supports this notion. In the case of ERP systems and large systems in general, organizations implementing technology have decided that a "staged" or incremental approach is preferable to "big bang" implementations. The staged approach allows less radical change to be introduced in the organization. A logical extension of this trend in project implementation approaches to valuing technologies in the selection decision seems clear: if companies can commit to a technology and yet retain the option to proceed, as well as options for future rents, the technology is more valuable to the organization.
ERP systems, and more specifically OS-ERP systems, tend to allow for future options. Option valuation will be performed multiple times throughout the life of a project; it is not a one-time pre-implementation exercise. This is important: as implementation techniques have changed, so have the options that are afforded by technology positioning investments. For example, a company may look at the option value of implementing specific modules of an ERP system. Future options may be to continue to implement other modules of this system, to implement an integrated best-of-breed addition, to use these modules as a stand-alone part of the company's system, to integrate a module back to existing legacy systems, and so forth. This approach would be in congruence with current project management practices, where the project is analyzed at different steps to ensure it is proceeding satisfactorily. Likewise, the option value of an investment should be analyzed periodically to determine how to proceed with the option5; the emphasis here is that option value does change with time and thus should be examined over time. Most of the studies of real options in a technology context have used rigorous, finance-influenced quantitative methods for evaluating the value of real options (Benaroch, 2000, 2002;
Benaroch & Kauffman, 1999; Clemons & Gu, 2003; Santos, 1991; Taudes, 1998, 2000). Table 7 provides important background information about these studies. For valuing the option of OS-ERP, I propose thinking more in line with McGrath (1997), McGrath and MacMillan (2000), Chen and Chen (2005), and Fichman (2004). These studies apply real options thinking to a qualitative options valuation method. For example, McGrath (1997) provides factors that influence the option value of a technology positioning option. This framework was expanded with explicit items to characterize each factor in later work (McGrath & MacMillan, 2000). This theory of real options for technology positioning focuses on the factors that are necessary for specific domains: new product development and R&D-type research. Although there are many similarities between R&D investments and IT platform investments, there are also many differences (Fichman, 2004). These differences are explained by drawing from four complementary perspectives (technology strategy, organizational learning, innovation bandwagons, and technology adaptation) to develop 12 factors identified as antecedents of option value in IT platform investments (Fichman, 2004).
Table 7. Finance-based studies of IT platform decisions as real options

Authors | Major Theme | Context | Case or Conceptual
Clemons & Gu, 2003 | Strategic options in IT infrastructure | Credit card rates for Capital One (industry level) | Case
Benaroch, 2002 | Managing IT investment risk in a way that helps to choose which options to deliberately embed in an investment | Internet sales channel | Case
Benaroch & Kauffman, 2000 | Investment timing | POS debit market | Case
Taudes, 2000 | Evaluates ERP platform change | SAP R/2 to SAP R/3 | Case
Benaroch & Kauffman, 1999 | Investment timing | POS debit market | Case
Taudes, 1998 | Evaluates software growth options | EDI growth option | Case
Dos Santos, 1991 | Applies real options theory to IT investments using two stages, where the first stage creates the option for the second stage | None | Conceptual
As the focus of this book chapter is on the option value of OS-ERP systems, the original factors proposed to explain the option value of platform decisions (Fichman, 2004) are pared down to the most influential in this context, and the short definition of each is contextualized to OS-ERP option valuation. Then several factors (see Table 8) are added based on the previous discussion of OS-ERP systems.

According to real options theory, options are valued more highly if limited commitment is required to take advantage of the option. In terms of OS-ERP options, less commitment is required if the organization already possesses the resources to customize, maintain, and upgrade the OS-ERP software. Future rents may be created by customizing and upgrading the OS-ERP software, and fewer resources are required if the quality of the source code is high. Likewise, a fit between the business model of the OS vendor and the type of services and future solutions the organization might need requires less commitment in terms of gaining acceptance of the vendor. Future rents will be created if the business model of the vendor proves viable and the vendor is able to survive. This phenomenon is not unlike the dot-com era, where the business model was as important as the technology. As was discussed earlier, these factors cannot be ignored in the evaluation of an OS-ERP software option.
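To make this qualitative, factor-based valuation concrete, the sketch below scores an OS-ERP option against the five factors summarised in Table 8. It is a minimal illustration only: the weights, the 1-5 scoring scale, and the OsErpOption structure are assumptions introduced here for exposition, not part of the published frameworks of McGrath (1997) or Fichman (2004).

```python
from dataclasses import dataclass

# Hypothetical weights for the five factors in Table 8; a real valuation
# would calibrate these to the adopting organisation.
FACTOR_WEIGHTS = {
    "network_externalities": 0.25,
    "network_dominance": 0.25,
    "customization_capability": 0.20,
    "source_code_quality": 0.15,
    "vendor_business_model": 0.15,
}

@dataclass
class OsErpOption:
    """Scores (1 = weak, 5 = strong) for one OS-ERP candidate."""
    name: str
    scores: dict  # factor name -> score

    def option_value(self) -> float:
        # Weighted sum; a higher value means the option is worth holding.
        return sum(FACTOR_WEIGHTS[f] * s for f, s in self.scores.items())

candidate = OsErpOption("ExampleERP", {
    "network_externalities": 4,
    "network_dominance": 2,
    "customization_capability": 5,
    "source_code_quality": 3,
    "vendor_business_model": 3,
})
print(f"{candidate.name}: {candidate.option_value():.2f} / 5")
```

Consistent with the earlier point that option value changes with time, such a score would be recalculated at each project checkpoint rather than computed once before implementation.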
FUTURE TRENDS

Global Adoption of Open Source ERP

International research on open source firms is scarce (Bonaccorsi et al., 2006). Research in the area of OS-ERP packages is scarcer, and international OS-ERP research is scarcer still. For that reason, some generalizations about the current state of "adoption of open source" research are attempted based on what is happening in open source software in general. As well, several open source ERP participants were asked their opinions of the international open source ERP market. There is clearly international activity in the open source arena, as evidenced by a study by Lancashire (2001) showing that contributions to open source software development were shifting to an international origin. However, even this study had a very small sample (two projects), and there are questions as to the generalizability of its findings. More recently, evidence of international activity in open source projects comes from the samples taken for some open source research. Bonaccorsi et al. (2006) surveyed 175 partners and system administrators of Italian open source6 firms. Their sample was drawn using a snowballing technique, where initial contacts refer other
Table 8. Factors for valuing OS-ERP options

Factor | Short Definition
Susceptibility to network externalities (Fichman, 2004) | The extent to which a technology increases in value to individual adopters with the size of the adoption network. In the case of OS-ERP, this is particularly relevant, as future versions of the software depend on the adoption network, the partners, and developers.
Prospects for network dominance of the technology instance (Fichman, 2004) | The extent to which the technology instance being adopted is likely to achieve a dominant position relative to competing technology instances within the same class. In the case of OS-ERP, a dominant position may be achieved by a large network of adoption, partners, and developers, as well as venture capitalists.
Customization | The extent to which an organization possesses the resources to customize, maintain, and upgrade OS-ERP software.
Quality of Source Code | The extent to which the source code associated with an OS-ERP is of high quality.
Business Model of OS Vendor | The extent to which the business model of the OS vendor is acceptable and viable to the organization.
potential participants. This sample hints at the breadth of OSS activity in European countries. Currently, several open source ERP participants based in the United States see potential for international growth. One open source ERP company estimates that about 15% of its business comes from outside the United States, and its CEO sees "enormous growth opportunity internationally." Another participant involved with an open source ERP package based in the United States reports that in 2005 about 80% of his income came from international sources. There are also open source vendors based all over the world; the small sample of ERP vendors studied for this research shows that the base country for these organizations varies greatly (see Table 3).

There are many reasons that open source ERP is being developed internationally and development services sold internationally. The requirements of ERP systems vary from country to country. For example, China has different requirements of ERP packages than the U.S. (Liu & Steenstrup, 2004). So, where certain packages dominate the U.S. ERP market, they are less diffused in the Asian market. Similar considerations will apply to open source ERP applications internationally. There is more room for competition from open source ERP packages in countries where the requirements are not as mainstream, which helps explain the proliferation of open source ERP developments around the world.

As for the services of U.S. developers being sold around the world, some U.S.-based open source ERP participants feel that the international opportunity is greater because the dollar is weaker, making development by U.S. programmers more affordable to those outside the U.S. The economics of this assumption also work in reverse: there are many instances of international partners to U.S.-based ERP projects being called on to develop customizations for U.S.-based users of the software because this international labor is more affordable.
CONCLUSION

This chapter outlines the current state of several OS-ERP packages (see Table 1) along the dimensions of licensing, geographic origins, number of customers, number of partners, targeted organization size, and functionality. It then discusses several factors that are important to decision making about OS-ERP packages: susceptibility to network externalities, prospects for network dominance, customization, quality of the source code, and business model of the OS vendor. These factors will be important to OS-ERP packages crossing the credibility gap. Currently, as companies scour the landscape looking for an ERP package, very few open source offerings have established credibility as reliable, maintainable, and scalable. Once several OS-ERP packages cross this hurdle, more widespread use of OS-ERP can be expected. As well, since the cost of ownership of an ERP package is so complex, marketing OS-ERP packages as more affordable is probably not going to gain much traction with major decision makers in the ERP space. Other features of OS-ERP, such as customizability (given an adequate upgrade path) and a strong OS-ERP network, most specifically customers and consulting partners, will be desirable to decision makers.

This chapter aims to enlighten practitioners and academia about the growing field of OS-ERP systems and their role in the international ERP community. There are significant differences in the offerings of open source ERP packages, and theoretically grounded evaluation techniques for OS-ERP are little researched or published. The aforementioned topics are important for creating a clear picture of what is currently on the market and the future directions of the OS-ERP market. Given the data presented in this chapter, it is clear that while OS-ERP is entering the ERP market, it is far from a strong player. Also, OS-ERP is currently marketed as a small and medium enterprise option, with some governmental agencies
also considering its use. Future research should attempt to explore the value gained and experiences of organizations actually using OS-ERP systems.
FUTURE RESEARCH DIRECTIONS

OS-ERP is a little-researched area. Future research should rigorously interview OS-ERP participants about motivation, as this will add to the current literature on OSS motivation (several motivation articles are listed in the Additional Reading section). Many open source software initiatives fill a gap in functionality and are thus interesting to programmers. ERP systems are mature enough to lack such gaps, so determining what interests the programmers who take part in OS-ERP projects could shed more light on the issue of motivation.

OS-ERP in the global context is also an area ripe for research. Case study research of a large multinational implementation of OS-ERP would inform academia as to how large organizations work with open source communities and whether the functionality offered by OS-ERP systems is adequate in such a setting. Survey research with global companies might shed light on the differences between companies that adopt OS-ERP solutions and those that choose proprietary solutions. Global public sector research is also needed, as the motivations and evaluation criteria of public sector organizations are noted in the popular press to differ from those of private companies.

To build on the ideas in this chapter, the five factors should be included in a survey of organizations that have adopted, and that have not adopted, open source ERP to determine how these factors impacted the decision, and whether any of these factors had future implications for the success of the OS-ERP implementation. Each factor should be further researched in terms of options evaluation in the OS-ERP domain. For example, the author has posited that OS-ERP is closer to custom software than to packaged software. This assertion should
be tested through rigorous research. This could be done by looking at how much customization is actually done to OS-ERP packages, and how upgrades and maintenance are handled on these customized systems. The ERP customization literature would benefit from such research. This is a partial list of possible future research directions. All of the topics covered in this chapter (factors for evaluating options, global OS-ERP systems, and OS-ERP organizations) require much more research for academia to build a complete and coherent picture of OS-ERP and its impacts.
REFERENCES

Adamson, C. (2004). Java.net: The source for Java technology collaboration. Retrieved November 4, 2006, from http://today.java.net/pub/a/today/2004/06/01/ofbiz.html

Adempiere. (2006). Adempiere: It's just a community—Nothing personal. Retrieved from http://adempiere.red1.org/

Apache. (2007). Apache license, version 2.0. Retrieved March 8, 2007, from http://www.apache.org/licenses/LICENSE-2.0.html

Benaroch, M. (2000). Justifying electronic banking network expansion using real options analysis. MIS Quarterly, 24(2), 197. doi:10.2307/3250936

Benaroch, M. (2002). Managing information technology investment risk: A real options perspective. Journal of Management Information Systems, 19(2), 43–84.

Benaroch, M., & Kauffman, R. (1999). A case for using real options pricing analysis to evaluate information technology project investments. Information Systems Research, 10(1), 70–86. doi:10.1287/isre.10.1.70
Bonaccorsi, A., Giannangeli, S., & Rossi, C. (2006). Entry strategies under competing standards: Hybrid business models in the open source software industry. Management Science, 52(7), 1085–1097. doi:10.1287/mnsc.1060.0547

Caton, M. (2006). OpenMFG: ERP basics and more. eWeek, 23, 44-45.

Clemons, E. K., & Gu, B. (2003). Justifying contingent information technology investments: Balancing the need for speed of action with certainty before action. Journal of Management Information Systems, 20(2), 11–48.

Cliff, S. (2003, March 6). Survey finds big variation in ERP costs. Computer Weekly, 8.

Compiere. (2006a). Andre Boisvert joins Compiere team. Retrieved from http://www.compiere.org/news/0522-andreboisvert.html

Compiere. (2006b). Compiere appoints open source thought leader to its board of directors: Larry Augustin will help drive continued growth of leading open source ERP and CRM provider. Retrieved November 3, 2006, from http://www.compiere.org/news/0724-augustin.html

DiMaio, A. (2004). Look beyond TCO to judge open source software in government. Gartner (G00123983).

DiMaio, A. (2007). When to use custom, proprietary, open-source or community source software. Gartner (G00146202).

Feller, J., & Fitzgerald, B. (2000). A framework analysis of the open source software development paradigm. In Proceedings of the 21st International Conference in Information Systems (ICIS 2000).

Ferguson, R. B. (2006). Open-source ERP grows up. eWeek, 23(27), 26-27.
Fichman, R. G. (2004). Real options and IT platform adoption: Implications for theory and practice. Information Systems Research, 15(2), 132–154. doi:10.1287/isre.1040.0021

Fichman, R. G., Keil, M., & Tiwana, A. (2005). Beyond valuation: "Options thinking" in IT project management. California Management Review, 47(2), 74–96.

Fisterra. (2007). Fisterra.org: A short history. Retrieved March 8, 2007, from http://community.igalia.com/twiki/bin/view/Fisterra/ProjectHistory

Fitzgerald, B. (2006). The transformation of open source software. MIS Quarterly, 30(3), 587–598.

Gattiker, T., & Goodhue, D. (2004). Understanding the local-level costs and benefits of ERP through organizational information processing theory. Information & Management, 41, 431–443. doi:10.1016/S0378-7206(03)00082-X

GNU. (2007a). GNU general public license, version 2, June 1991. Retrieved March 8, 2007, from http://www.gnu.org/licenses/gpl.txt

GNU. (2007b). GNU's not Unix! Free software, free society. Retrieved March 8, 2007, from http://www.gnu.org/

Gross, P. H. B., & Ginzberg, M. J. (1984). Barriers to the adoption of application software packages. SOS, 4(4), 211–226.

Guth, R., & Clark, D. (2005, August 8). Linux feels growing pains as users demand more features. Wall Street Journal, B1.

HK-Software. (2007). HK-Software features and modules, reasons for using AvERP. Retrieved March 8, 2007, from http://www.hk-software.net/h-k.de/content/doc_138099-3-5-1.php

Hoover, L. (2006). Compiere is on the move—again. NewsForge: The Online Newspaper for Linux and Open Source.
Krishnamurthy, S. (2005). An analysis of open source business models. In J. Feller, B. Fitzgerald, S. Hissam, & K. R. Lakhani (Eds.), Making sense of the bazaar: Perspectives on open source and free software. Boston: MIT Press.

Lancashire, D. (2001). Code, culture and cash: The fading altruism of open source development. First Monday, 6(12).

Levin, R., Mateyaschuk, J., & Stein, T. (1998). Faster ERP rollouts. Information Week.

Liu, L., & Steenstrup, K. (2004). ERP selection criteria for Chinese enterprises. Gartner (COM22-0114).
Nolan, S. (2005). Knowing when to embrace open source. Baseline, (48), 76.

Open Source Initiative. (2005). Open source definition. Retrieved from http://www.opensource.org/

Open Source Initiative. (2006). OSI Web site. Retrieved October 24, 2006, from http://www.opensource.org/licenses/

OpenPro. (2007). OpenPro: The open source ERP software solutions that give you more value and more features. Retrieved March 8, 2007, from http://www.openpro.com
McAllister, N. (2005). You can't kill TCO. InfoWorld, (August), 29.
Prentice, B. (2006). The advent of open-source business applications: Demand-side dynamics. Gartner (G001412).
McGrath, R. G. (1997). A real options logic for initiating technology positioning investments. Academy of Management Review, 22(4), 974–996. doi:10.2307/259251
Raymond, E. S. (1999). The cathedral and the bazaar: Musings on Linux and open source by an accidental revolutionary. Sebastopol, CA: O’Reilly and Associates.
McGrath, R. G., & MacMillan, I. C. (2000). Assessing technology projects using real options reasoning. Research-Technology Management, 43(4), 35–49.
Rist, O. (2005). Open source ERP. InfoWorld, 27(32), 43–47.
Mozilla. (2007). Mozilla public license, version 1.1. Retrieved March 8, 2007, from http://www.mozilla.org/MPL/MPL-1.1.html

MySQL. (2007). The world's most popular open source database: About MySQL AB. Retrieved March 8, 2007, from http://www.mysql.com/company/

Nah, F. F. H., & Zuckweiler, K. M. (2003). ERP implementations: Chief information officer's perceptions of critical success factors. International Journal of Human-Computer Interaction, 16(1), 5–22. doi:10.1207/S15327590IJHC1601_2

Nichols, D., & Twidale, M. (2003). The usability of open source software. First Monday, 8(1).
Santos, B. L. D. (1991). Justifying investments in new information technologies. Journal of Management Information Systems, 7(4), 71.

Serrano, N., & Sarriegi, J. (2006). Open source software ERPs: A new alternative for an old need. IEEE Software, 23(3), 94–97. doi:10.1109/MS.2006.78

Soh, C., Kien, S. S., & Tay-Yap, J. (2000). Cultural fits and misfits: Is ERP a universal solution? Communications of the ACM, 43(4), 47–51. doi:10.1145/332051.332070

Soh, C., & Sia, S. K. (2004). An institutional perspective on sources of ERP package-organization misalignments. The Journal of Strategic Information Systems, 13(4), 375–397. doi:10.1016/j.jsis.2004.11.001
Soh, C., & Sia, S. K. (2005). The challenges of implementing "vanilla" versions of enterprise systems. MIS Quarterly Executive, 4(3), 373–384.

Spinellis, D. (2006, September/October). 10 tips for spotting low-quality open source code. Enterprise Open Source Journal.

SugarCRM. (2007). SugarCRM: Commercial open source. Retrieved March 8, 2007, from http://www.sugarcrm.com/

Taudes, A. (1998). Software growth options. Journal of Management Information Systems, 15(1), 165–185.

Taudes, A. (2000). Options analysis of software platform decisions: A case study. MIS Quarterly, 24, 227. doi:10.2307/3250937

Watson, R., Wynn, D., & Boudreau, M. C. (2005). JBOSS: The evolution of professional open source software. MIS Quarterly Executive, 4(3), 329–341.

Watson, R. T., Boudreau, M.-C., York, P., Greiner, M., & Wynn, D. (forthcoming). The business of open source. Communications of the ACM.
ADDITIONAL READING

Al Marzoug, M., Zheng, L., Rong, G., & Grover, V. (2005). Open source: Concepts, benefits, and challenges. Communications of the AIS, (16), 505-521.

Benkler, Y. (2002). Coase's penguin, or, Linux and the nature of the firm. The Yale Law Journal, 112(3), 369–446. doi:10.2307/1562247

Brown, C. V., & Vessey, I. (2003). Managing the next wave of enterprise systems: Leveraging lessons from ERP. MIS Quarterly Executive, 2(1), 65–77.
Crowston, K., & Howison, J. (2005). The social structure of free and open source software development. First Monday. Retrieved from http://www.firstmonday.org/issues/issue10_2/crowston/index.html

Fitzgerald, B., & Kenny, T. (2004). Developing an information systems infrastructure with open source software. IEEE Software, 21(1), 50–55. doi:10.1109/MS.2004.1259216

Gallivan, M. J. (2001). Striking a balance between trust and control in a virtual organization: A content analysis of open source software case studies. Information Systems Journal, 11(4), 277–304. doi:10.1046/j.1365-2575.2001.00108.x

Gattiker, T. F., & Goodhue, D. L. (2005). What happens after ERP implementation: Understanding the impact of inter-dependence and differentiation on plant-level outcomes. MIS Quarterly, 29(3).

Hars, A., & Ou, S. S. (2002). Working for free? Motivations for participating in open-source projects. International Journal of Electronic Commerce, 6(3), 25–39.

Hertel, G., Niedner, S., & Herrmann, S. (2003). Motivation of software developers in open source projects: An Internet-based survey of contributors to the Linux kernel. Research Policy, 32(7), 1159–1177. doi:10.1016/S0048-7333(03)00047-7

Krishnamurthy, S. (2002). Cave or community? An empirical examination of 100 mature open source projects. First Monday, 7(6).

Lacy, S. (2005). Open source: Now it's an ecosystem. Business Week, (October), 7.

Lerner, J., & Tirole, J. (2002). Some simple economics of open source. The Journal of Industrial Economics, 50(2), 197.

Liang, H., & Xue, Y. (2004). Coping with ERP-related contextual issues in SMEs: A vendor's perspective. The Journal of Strategic Information Systems, (13), 399–415. doi:10.1016/j.jsis.2004.11.006
Mabert, V. A., & Watts, C. A. (2005). Enterprise applications: Building best-of-breed systems. In E. Bendoly & F. R. Jacobs (Eds.), Strategic ERP extension and use. Stanford, CA: Stanford University Press.

Madanmohan, T. R., & Krishnamurthy, S. (2005). Can the cathedral co-exist with the bazaar? An analysis of open source software in commercial firms. First Monday, (Special Issue #2). Retrieved from http://firstmonday.org/issues/special10_10/madanmohan/index.html

Nelson, M., Sen, R., & Chandrasekar, S. (2006). Understanding open source software: A research classification framework. Communications of the AIS, 17(12), 266–287.

Niederman, F., Davis, A., Greiner, M. E., Wynn, D., & York, P. (2006). A research agenda for studying open source II: View through the lens of referent discipline theories. Communications of the AIS, 18.

Niederman, F., Davis, A., Wynn, D., & York, P. (2006). A research agenda for studying open source I: A multi-level framework. Communications of the AIS, 18.

Stewart, K., & Gosain, S. (2006). The impact of ideology on effectiveness in open source software development teams. MIS Quarterly, 30(2), 291–314.

Watson, R. T., Boudreau, M.-C., Greiner, M., Wynn, D., York, P., & Gul, R. (2005). Governance and global communities. Journal of International Management, 11(2), 125–142. doi:10.1016/j.intman.2005.03.006

Zhao, L., & Elbaum, S. (2003). Quality assurance under the open source development model. Journal of Systems and Software, 66, 65–75.

ENDNOTES

1. http://www.sugarcrm.com/crm/SPL. The SugarCRM Public License Version (SPL) consists of the Mozilla Public License Version 1.1, modified to be specific to SugarCRM. Please see the actual license to understand the terms of this license.

2. Intensive and complete functionality for ERP vendors includes the aforementioned functional areas, though the specifics of what is included in each functional area may be termed differently by different vendors (i.e., Oracle and SAP); however, both offer much of the same functionality and this base functionality is what is intended by the author when discussing ERP systems. For details about this functionality, see SAP.com or Oracle.com.

3. See the Free Software Foundation Web site at http://www.fsf.org/licensing/licenses/ for another source for OSS licensing resources.

4. "Forking" is not unusual in open source communities. Forking refers to taking the code in a separate direction, usually a direction not intended by those managing the open source project. There are dissenting opinions as to whether this is cause for alarm for the original open source project.

5. For a review of potential option outcomes, see Fichman et al. (2005).

6. Open source is defined as firms that supply OS-based offerings, even if the offering includes proprietary solutions.
This work was previously published in Enterprise Resource Planning for Global Economies: Managerial Issues and Challenges, edited by Carlos Ferran and Ricardo Salim, pp. 56-76, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 3.15
Real Time Decision Making and Mobile Technologies Keith Sherringham IMS Corp, Australia Bhuvan Unhelkar MethodScience.com & University of Western Sydney, Australia
DOI: 10.4018/978-1-60566-156-8.ch016

ABSTRACT

For business decision making to occur, data needs to be converted to information, then to knowledge and rapidly to wisdom. Whilst Information Communication Technology (ICT) solutions facilitate business decision making, ICT has not always been effective in providing the critical "data to wisdom" conversion necessary for real-time decision making on any device anywhere anytime. This lack of effectiveness in real-time decision making has been further hampered by a dependence upon location and time. Mobile technologies provide an opportunity to enhance business decision making by freeing users from complex information management requirements and enabling real-time decision making on any device anywhere anytime. This chapter discusses the role of mobile technologies in real time decision making.
INTRODUCTION

As society exits the industrial age and enters the knowledge era, it suffers from data overload: information is lacking, knowledge is scarce, and wisdom is wanting (Balthazard & Cook, 2004). Instead of having the right information presented at the right time in the right way to make decisions, people spend large amounts of time trawling and sifting through data to try to find what is needed to make decisions (Adair, 2007). This practice of searching and sifting through data in an effort to find information poses a huge in-built inefficiency with higher costs and
un-assured service delivery. Advances in mobile technology are likely to create further challenges for these searches and sorts as they bring in additional dimensions of location-independence and personalization. Rendering information in context, as part of work-flow, to any device anywhere anytime to enable real time decision making is the goal of many organisations. The mobile enablement of business, as discussed by Sherringham and Unhelkar (2008a) in a separate chapter of this book, is expected to further drive the demand for real time decision making services (Ghanbary 2006). The emergence of real time decision making, the elements required to achieve it, and the implications of mobile technologies for real time decision making are discussed in this chapter.
DATA – WISDOM VALUE STACK

A record in a database, a marketing video, or a company's financial report are all data. Data only becomes information when it is analysed, understood, and needed. With the application of experience and skills, information becomes knowledge, and when such knowledge is applied at the right time in the right way, knowledge becomes wisdom (power/profit). This data to wisdom conversion is illustrated in Figure 1.

Figure 1. Data – Wisdom value-stack
The familiar example is a hall porter who, by putting a favourite wine in a hotel room, collects a reward. The hall porter takes the data elements (arrival at that hotel, to the appropriate room, at that correct time, and with the wine) and, because the elements are required and understood, they are information. Context is given to the information (managing the relationships between pieces of information and used with work-flow) to achieve knowledge, which is then applied at the right time in the right way to realise a profit (Ghanbary and Arunatileka 2006). The business imperative of the data – wisdom conversion has been widely noted, e.g. Macmanus et al. (2005), but the significance of the value stack lies in the importance of providing context through managing relationships between pieces of information and by integrating information with work-flow. Within mobile business, location provides additional context to the information.

Resolution of the data – knowledge conversion allows the right information to be presented at the right time in the right way to the right audience, providing two advantages. Firstly, the need for users to have advanced information management skills to complete the most rudimentary of tasks is reduced. Secondly, the difficulty of managing information on the small screen of current mobile devices is removed. Resolution of the data – knowledge conversion to service both business and mobile business will allow mobility to realise its true significance through the provision of real time decision making (Raton 2006) and provide business with a major competitive advantage (Ekionea 2005).
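As a reading aid, the following sketch traces the value stack of Figure 1 for the hall porter example. The function names and boolean flags are illustrative assumptions for exposition, not a formal model from the chapter.

```python
# Each step adds what the chapter says is needed for the next layer:
# analysis and need turn data into information; context (relationships
# between pieces of information plus work-flow) turns information into
# knowledge; timely application turns knowledge into wisdom (profit).
def to_information(data, needed, understood):
    # Data becomes information only when analysed, understood and needed.
    return data if (needed and understood) else None

def to_knowledge(information, context, workflow_step):
    # Context = relationships between pieces of information plus work-flow.
    return {"info": information, "context": context, "step": workflow_step}

def to_wisdom(knowledge, right_time, right_way):
    # Wisdom is knowledge applied at the right time in the right way.
    return "reward collected" if (right_time and right_way) else "no value realised"

data = ["guest arriving today", "room 12", "favourite wine: shiraz"]
info = to_information(data, needed=True, understood=True)
knowledge = to_knowledge(info, context="guest profile + booking",
                         workflow_step="prepare room")
print(to_wisdom(knowledge, right_time=True, right_way=True))
```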
ELEMENTS OF REAL TIME DECISION MAKING

With real time decision making, our favourite restaurants can bid in real time for our patronage on any device anywhere anytime. The information needed to take the best route home
is supplied dynamically as the road is traversed, and everything needed to make a foreign exchange trade is rendered to any device anywhere anytime for decision and execution (Gupta 2006). The key elements of real time decision making are summarised in Figure 2 and are discussed as follows.

Consolidated Repository – Information for real time decision making is accessed from virtual consolidated repositories that combine spatial data, database data, transactional data, and documents (including images). To stop the duplication of effort, information is single sourced from virtual consolidated repositories. Within these repositories, content is separated from presentation and from the mechanism of delivery. Archiving, backup, recovery, and version control are all performed on the repositories on behalf of the user, freeing up both the end user and the end device (Sherringham 2005). The benefits of distributed computing power shall continue within the mobile computing environment but, unlike the desktop environment where data were trapped locally, consolidated data storage ensures that data are single sourced and not stored for extended periods of time on the end device (Sherringham 2008). To effectively support mobile computing, large centralised databases shall be replaced with virtual consolidated contextual information bases (Yang and Wang 2006).

Information Relationships – To achieve knowledge and to make decisions, different
pieces of information often need to be drawn together, i.e. the relationships between elements of information need to be provided to give context. Information relationships are managed using a metadata framework that provides all of the supporting details to give context and links information to steps in a work-flow. A metadata framework includes classification schema, versioning details, role-based access, security and privileges, as well as device-specific information and spatial needs. The use of metadata to manage information relationships is critical in the data to knowledge conversion and is a prerequisite to the provision of more sophisticated mobile business services.

Work-flow – By defining and integrating information with process, knowledge can be presented in context for real time decision making (Ekionea and Abou-Zeid 2005). The user is taken through a series of recipes (process and information combined in sequence) to realise the required outcomes. Information relationships and work-flow together provide context and allow the conversion of data into information and on to knowledge. The challenge is the development and maintenance of the standard recipes by audience and of the virtual consolidated contextual information bases. When providing mobile business services and products, it is ease of use that influences user acceptance and product adoption. The use of clearly defined recipes to guide the user to the assured outcome is part of the mobile business solution.
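A minimal sketch of how such a metadata framework and its "recipes" might be represented is given below. The field and class names are hypothetical, chosen to mirror the elements listed above (classification schema, versioning, role-based access, device-specific and spatial details); they are not a schema proposed by the authors.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Metadata:
    """Supporting details that give an information element its context."""
    classification: str                 # classification schema entry
    version: str                        # versioning details
    roles_allowed: List[str]            # role-based access and privileges
    device_profiles: List[str]          # device-specific rendering hints
    location: Optional[str] = None      # spatial context, if any

@dataclass
class InformationElement:
    content: str
    meta: Metadata

@dataclass
class RecipeStep:
    """One step of a 'recipe': process and information combined in sequence."""
    action: str
    inputs: List[InformationElement] = field(default_factory=list)

@dataclass
class Recipe:
    """A work-flow that walks the user to an assured outcome."""
    outcome: str
    steps: List[RecipeStep] = field(default_factory=list)

# Example tied to the chapter's foreign exchange scenario.
rate_sheet = InformationElement(
    "AUD/USD spot rate",
    Metadata("fx/rates", "2008-04-01", ["trader"], ["phone", "desktop"]),
)
trade = Recipe("execute FX trade", [RecipeStep("confirm rate", [rate_sheet]),
                                    RecipeStep("place order")])
print(trade.outcome, "-", len(trade.steps), "steps")
```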
Figure 2. Elements of real time decision making
Unified search – Search shall be unified across all data types and tightly integrated with work-flow. Search harnesses the information relationships to provide context and uses role-based access and other usability information to provide focused results. An effective search capability is a pivotal tool in the provision of services on mobile devices.

Artificial intelligence – Work-flow, search, and information relationships are all expected to merge into one layer of artificial intelligence. It is the presence of artificial intelligence that will liberate the end user and empower mobile business.

Security – As part of the overall security issue, a metadata framework facilitates security provision and management through the application of user access rights to specific information elements.

Messaging – Real time decision making requires a unified messaging environment that combines voice, data, text, images, and video. To guarantee service delivery, the messaging environment needs to be architected around the FedEx model, where the quality of hand-off, the message, and the delivery and storage of the message are all separated, and utility infrastructure underpins the message processing. Supporting the messaging environment are the device and asset management functionality and capability, including locational information. Tagging of messages for application-specific
processing or archiving for compliance shall also be part of the messaging environment.

Presentation – With the information to be provided resolved, and context and work-flow in place, the remaining element is presentation to the end device. The interface will consist of a series of intuitive icons that provide the user with the required business functionality. Invoking an icon changes the interface to reflect the task on hand, with the work-flow embedded. The ability to customise and personalise the interface would also exist, together with an integrated search (Sherringham and Unhelkar 2008b).
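One way to read the FedEx-model separation described above is sketched below: acceptance (hand-off), the message itself, delivery, and storage are distinct components, each replaceable without touching the others. All class and method names are illustrative assumptions, not an API from the chapter.

```python
from dataclasses import dataclass
import datetime
import uuid

@dataclass
class Message:
    """The payload itself, kept separate from how it is moved or stored."""
    body: str
    tags: dict  # e.g. {"compliance": "archive-7y", "app": "fx-trading"}

class Storage:
    """Retention and compliance, independent of delivery."""
    def archive(self, tracking_id: str, msg: Message) -> None:
        print(f"archived {tracking_id} at {datetime.datetime.now():%Y-%m-%d}")

class Delivery:
    """Moves the message; knows nothing about its content or retention."""
    def route(self, tracking_id: str, msg: Message) -> None:
        print(f"routing {tracking_id} to subscriber devices")

class HandOff:
    """Quality-checked acceptance of a message into the network."""
    def accept(self, msg: Message) -> str:
        tracking_id = str(uuid.uuid4())      # every item is trackable
        Storage().archive(tracking_id, msg)  # archiving done on behalf of the user
        Delivery().route(tracking_id, msg)
        return tracking_id

tid = HandOff().accept(Message("trade confirmed", {"compliance": "archive-7y"}))
```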
ICT INDUSTRY TRENDS AND REAL TIME DECISION MAKING

Involving telecommunications companies, hardware and software suppliers, content providers, and consulting services, real time decision making on any device anywhere anytime is an emerging trillion dollar business opportunity that will underpin business operations. The emergence of real time decision making from the evolving use of the Internet is shown in Figure 3. Brochureware remains a major use of the Internet and will transition to mobile devices. Although the Internet is increasingly being used to support transaction processing, many opportunities for development still exist because the
Figure 3. Evolution of the Internet and the emergence of real time decision making
transaction processing capability is still in its infancy, and advanced, complex transaction processing is currently not well supported. Basic transaction processing is starting to be seen on mobile devices, but complex transaction processing requires significant changes in business processes and the resolution of security and authentication before wide adoption on mobile devices occurs.

The emergence and growth of collaboration is the focus for much of the ICT industry and is increasingly providing other business opportunities. Collaboration includes Web 2.0, with social networking, business networking, and the need for entertainment. The greater use of the Internet for messaging and the convergence of telecommunications with the computer are also aspects of the collaboration stage in the evolution of the Internet and mobile business. Whilst mobile devices will play a significant role within collaboration, the maturity of business processes and of management frameworks remains an issue for business collaboration.

The natural extension of collaboration is real time decision making. Effective collaboration requires that the right information be shared with participants so that informed decisions can be made and then executed. With the provision of information in context linked to work-flow, real time decision making promises to add significantly to business capabilities. Whilst many businesses talk about collaboration and the provision of collaboration services, the value lies in real time decision making because of its higher value and range of opportunities. Compared with the Internet wave that swept through society when Web sites and the Internet came to the fore, the changes seen in society as a result of real time decision making will transform humanity. Strategically aligning both business and ICT now to support and adopt real time decision making is required.

Real time decision making brings together some key and innovative technologies (Table I) whilst providing a unified approach to software
development, enterprise architecture, business process, and information management (Alag 2006).

Iconic interface – Currently, an iconic interface is the most effective approach for providing a common and intuitive interface, from the small screen of a mobile device through to the more spacious screen real estate of the plasma television set. Iconic interfaces are increasingly used for many features of applications, with the display automatically changing to reflect the features most relevant to the needs of the user, e.g. Apple's iPhone and the latest release of Microsoft Office. The ability to personalise the interface and to be independent of location are extra features to be seen within mobile devices.

Features to functionality – Whilst the trend for more and more features to be supplied within software applications that are used by fewer people is still strongly present, this trend is set to change because of the need for a functionally driven iconic interface on mobile devices. An iconic interface that includes work-flow, and which changes to reflect the functionality required by a user to achieve an outcome, removes the user from the need for intimate knowledge of feature-driven software. The successful software of the future delivers functionality and NOT features.

Application consolidation – The use of an iconic interface that invokes functionality instead of features blurs the need for distinct killer applications, because the aim is to bring elements of functionality together to meet a need. The focus will no longer be on launching an application to complete a task, but on conducting a task and bringing together the functionality necessary to achieve the required outcomes. Whilst Microsoft's use of Outlook as a master application to drive desktop functionality is another step in the seamless integration of applications, this approach is still evolving. Microsoft PowerPoint allowed people to be presenters, Microsoft Word allowed people to be typists, and Microsoft Excel allowed people to be accountants. It was Microsoft's close linking
Table I. Summary of ICT trends impacting upon real time decision making

Trend | Description | Opportunity
Iconic interface | Iconic interface of business functionality that includes work-flow and changes to meet user need. Spans both the desktop and mobile devices. | Control the framework that manages the interface and control access to the rest.
Features to functionality | Move away from feature-rich software used by few people to functionality-driven applications used by many people across all devices. | The software of the future is functionality-driven software, including work-flow, built as objects of functionality and delivered as required to any device.
Application consolidation | Focus is not on application-specific software but on bringing together objects of functionality. The killer applications that standardised the desktop will be replaced by integration of elements of functionality rendered on any device. | Standardisation and dominance over mobile computing, desktop computing, and enterprise computing shall come from standardised seamless integration of elements of functionality.
Sound practice | Return to consolidated data storage and work-flow; separation of content from presentation and mechanism of delivery; use of a smart end device with the load taken by the server. | Position now and implement best practices in future developments, ready to support mobile computing.
Software as a Service (SaaS) | Real time decision making is a natural extension of SaaS, with only those services required being rendered as needed. | Real time decision making on mobile devices will drive the definition of new SaaS opportunities.
End device & operating system | In the emerging mobile market, the de-facto standards for hardware and the end device operating system are still to be realised. | Own the operating system of the mobile device and define the standards for the end device.
Data storage | Increasing demand for consolidated data storage is required to support mobile business. | Standardise and become the market leader in the development and provision of the global virtual consolidated databases that Google has already started.
Consolidated contextual information bases | The advances required are in information in context integrated with work-flow – contextual information bases. These will be virtual consolidated information bases. | The software to support consolidated contextual information bases requires a new generation of database software that will underpin access from millions of users on mobile devices.
Context based searching | Google became a billion dollar company searching data in an effort to find information. Searching in context is the evolving opportunity. | Realise the value in searching and managing knowledge. Realise the knowledge utility.
of killer applications with its operating system that allowed for standardisation of the desktop environment and led to market dominance. In the world of real time decision making and mobile devices, it is the seamless integration of elements of functionality that shall lead to standardisation and dominance over mobile computing, desktop computing, and enterprise computing (Armstrong 2006).

Return of sound practices – In the mainframe environment, the end device was dumb and everything was done centrally on a mainframe. The introduction of the PC realised the benefits of distributed computing power and of an intelligent
end device, but some bad trends also occurred. Firstly, the end device was burdened with more and more applications, which caused update and coordination problems. Secondly, the benefits of consolidated data storage and work-flow were lost. Furthermore, content was no longer separated from presentation and the mechanism of delivery. Through real time decision making and the demands of mobile business, some of these best practices can be returned to ICT (Kaliszewski 2006). To work effectively, the end device (mobile device) will be smart, but it cannot become overloaded like the desktop PC. The bulk of the work will be done at the server end, with results
only being displayed on the mobile device, along with the prompt for the next stage of the process. The mobile end device remains simple and easily managed. Using mobile devices to access contextual information from consolidated contextual information bases means that the benefits of consolidated data storage linked to work-flow shall be seen within mobile business. The separation of content from presentation and mechanism of delivery sees the return of another good practice, and a requisite for real time decision making is implemented. In addition, real time decision making requires that the business logic and processing rules (Lucas 2005) be stored in contextual databases and not in the source code – a return to good coding practices.

Software as a Service (SaaS) – The provision of Software as a Service is in its infancy, and the mobile opportunities for SaaS are still to be realised. Much of SaaS is still very application specific, and there have been concerns about the quality and applicability of code supplied with SaaS. Real time decision making, with elements of business functionality in the interface, is a natural extension of current SaaS. Mobile business will provide many new opportunities for SaaS.

End device and operating system – Like any other market, the evolution of the desktop environment led to a highly diversified market with many players in hardware, software, and killer applications. As the market evolved and matured, standards came into effect, and it is this standardisation that drove consolidation and the creation of market dominance by a few key players. Within the emerging mobile business market, the de-facto standard for hardware, both server and end device, is still to be defined. The operating system, for both the server and the end device, is also in need of standardisation. Market dominance comes with being the de-facto standard. The opportunities that come from standardisation in the mobile market are more extensive because of the convergence of the mobile phone with the laptop
computer, with television, with the gaming console, and with the music and video player. Whilst the PC environment was characterised by an integration of killer applications with an operating system that led to standardisation, the mobile business environment is different: there is a decline in the importance of killer applications but a greater significance in elements of business functionality. Close integration of elements of business functionality, through an iconic interface, with the mobile device operating system is the path to standardisation and market dominance for software vendors.

Data storage – The demand for data storage capability is set to increase rapidly. It is not so much the storage of transaction data and documents that will significantly pressure data storage, but the growth in imaging for work-flow, results from simulation, images from surveillance, and services for entertainment. The storage will not be on the end device but on the server, in global virtual consolidated databases (Raisinghani 2006).

Consolidated contextual information bases – Real time decision making and mobile business depend upon the use of contextual information bases to underpin their operation. These databases shall function along distributed lines and shall be viewed as one virtual global consolidated information base. The database software to support consolidated contextual information bases requires a new generation of object-orientated databases. Whether it is a user on a mobile device or a wireless napkin holder, these consolidated contextual information bases will service millions of devices.

Context based searching – Contextual based searching is key to the operation of real time decision making. The leading search capabilities at present are not only data-type specific, e.g. documents or transaction data, but, because of the lack of context and work-flow, the searching is not contextual. Development of contextual based
searching provides any given business with an unprecedented business opportunity and control over mobile business globally.

Artificial intelligence – Contextual based searching, the management of information relationships, and linking to work-flow are necessary for real time decision making, but the challenge is the sheer volume of data that currently exists, plus that which will evolve going forward. It is simply not possible for humans to classify and contextualise this volume of information by audience, nor to meet the dynamic nature of information and evolving needs. Artificial intelligence shall be used. Contextual based searching, the management of information relationships (metadata), and work-flow shall all merge to form a layer of artificial intelligence (Figure 2). The existing use of pattern matching and predictive capabilities in artificial intelligence shall be expanded to include perceptive and awareness capabilities. Artificial intelligence shall initially come to the fore in error and exception handling for routine transaction processing, because the majority of information accessed in business is part of routine transaction processing, and it is standardised transaction processing that underpins business operations (Moonis 2006). The potential applications of artificial intelligence within mobile business are almost limitless and, with artificial intelligence present on all hand-held devices and all fixed devices (napkin holders, fridges, PCs, and TVs), the market opportunity is unprecedented.
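The following sketch illustrates what contextual, role-aware search over a consolidated information base could look like, combining the metadata (roles) and work-flow (current step) discussed earlier. The filtering rules and data are invented here for illustration; they are not a search algorithm proposed by the authors.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    text: str
    context: str        # the work-flow step the item supports
    roles: List[str]    # role-based access from the metadata framework

INFORMATION_BASE = [
    Item("AUD/USD spot 0.92", context="fx-trade", roles=["trader"]),
    Item("Client credit limit", context="fx-trade", roles=["trader", "risk"]),
    Item("Canteen menu", context="facilities", roles=["all"]),
]

def contextual_search(query: str, user_role: str, workflow_step: str) -> List[Item]:
    """Unified search: results are filtered by role and by the user's
    current step in the work-flow, not just matched on keywords."""
    return [
        item for item in INFORMATION_BASE
        if query.lower() in item.text.lower()
        and item.context == workflow_step
        and (user_role in item.roles or "all" in item.roles)
    ]

print(contextual_search("credit", user_role="trader", workflow_step="fx-trade"))
```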
FUTURE DIRECTION

Even though real time decision making is still emerging and there is much work to be done to see its full realisation, some future trends are already starting to be identified. Real time decision making is an emerging trillion dollar industry that will evolve over the next 10 years, and collaboration between key players is how the opportunity will
be realised. Whilst a unified approach may currently be absent from the industry, the business opportunity is too great for the collaboration not to occur.

Whilst the demands of mobility shall be the major driver in realising real time decision making on any device anywhere anytime, it is the telecommunications network that underpins real time decision making. Major telecommunications companies like Verizon and British Telecom are deploying fibre optic networks to support not only an integrated communications environment but also the wholesale future transmission of data. Whilst the mobile dominates the last step to the end user, the bulk of data traffic shall remain on the fibre optic backbones. The future shall see an operating system layered on the routers and fibre optics of the telecommunications network to form one global virtual mainframe to support real time decision making.

Whilst the emergence of artificial intelligence is the critical layer in the solution that supports real time decision making (Figure 2), at least one more layer is still to be developed: that of voice. Gone will be the days of data entry and management through a keyboard; voice shall be the key mechanism. Other bio-recognition solutions are also expected to be developed.

The ICT industry has seen standardisation of the operating system and of applications. Standardisation has occurred at the desktop and is being seen at the enterprise level. Whilst standardisation of the mobile device is still to be realised, the real opportunity is standardisation of the marketplace. Business operates in marketplaces, e.g. banks require standards to operate globally, and the opportunity to standardise marketplace ICT awaits (the emergence of marketplace computing).

Access to information shall become a consumer right in the knowledge era. In the knowledge era, information shall underpin society and, like power and water, the Internet and knowledge will be a utility. Of all the utilities (gas, water, or electricity), and of all the infrastructures (roads, ports, rail, or
communications), the knowledge utility will be the most demanding and the most valuable.
CONCLUSION

Living in the knowledge era provides many opportunities and rewards to the individual, to business, and to society. For humanity to realise its true potential in the knowledge era, users need to be freed from the need for advanced information skills and empowered by real time decision making on any device anywhere anytime. Information in context, sourced from virtual consolidated contextual information bases and integrated with work-flow for delivery through a common interface across devices, is what is required. Artificial intelligence is how contextual based searching is achieved and how the relationships between pieces of information are managed to give context. Real time decision making is a trillion dollar business opportunity that is set to evolve over the next 10 years and to become the de-facto industry standard. Whilst the demands of mobility shall drive the evolution of real time decision making, it is the benefits derived in routine business transaction processing that will be the initial incentive for realisation. Beyond this, however, lies the moral responsibility of realising real time decision making for the betterment of humanity, because knowledge is freedom and knowledge is the liberator from poverty and tyranny.
REFERENCES

Adair, J. E. (2007). Decision making & problem solving strategies (2nd ed.). Philadelphia: Kogan Page.
Alag, H. S. (2006). Business Process Mobility. In B. Unhelkar (Ed.), Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives. Hershey, PA, USA: IGI Global.

Armstrong, M. (2006). A handbook of management techniques: A comprehensive guide to achieving managerial excellence and improved decision making (Rev. 3rd ed.). London: Kogan Page.

Balthazard, P. A., & Cook, R. A. (2004). Organizational Culture and Knowledge Management Success: Assessing the Behaviour-Performance Continuum. Paper presented at the Proceedings of the 37th Hawaii International Conference on System Sciences, Hawaii, USA.

Ekionea, J. B., & Abou-Zeid, E. (2005). Knowledge Management and Sustained Competitive Advantage: A Resource-Based Analysis. Paper presented at the IRMA Conference, San Diego, USA.

Ghanbary, A. (2006). Evaluation of mobile technologies in the context of their applications, limitations and transformation. In B. Unhelkar (Ed.), Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives (Chapter 42). Hershey, PA, USA: IGI Global.

Ghanbary, A., & Arunatileka, D. (2006). Enhancing Customer Relationship Management through Mobile Personnel Knowledge Management (MPKM). In Proceedings of the IBIMA International Conference, Bonn, Germany, 19-21 June.

Gupta, J. N. D. (2006). Intelligent Decision-making Support Systems: Foundations, Applications and Challenges. London: Springer-Verlag London Limited.

Kaliszewski, I. (2006). Soft computing for complex multiple criteria decision making. New York, NY: Springer.
Lucas, H. C. (2005). Information technology: Strategic decision making for managers. Hoboken, NJ: Wiley.

Macmanus, D. J., Snyder, C. A., & Wilson, L. T. (2005). The Knowledge Management Imperative. Paper presented at the IRMA Conference 2005, San Diego, USA.

Moonis, A. (2006). Advances in Applied Artificial Intelligence: Proceedings of the 19th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2006, Annecy, France, June 27-30. Berlin Heidelberg: Springer-Verlag GmbH.

Raisinghani, M. S. (2006). M-Business: A Global Perspective. In B. Unhelkar (Ed.), Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives (Chapter 31). Hershey, PA, USA: IGI Global.

Raton, B. (2006). Autonomous mobile robots: Sensing, control, decision-making, and applications. FL: CRC/Taylor & Francis.

Sherringham, K. (2005). Cookbook for Market Dominance and Shareholder Value: Standardising the Roles of Knowledge Workers. London: Athena Press (p. 90).

Sherringham, K. (2008). Catching the mobility wave. Information Age, April-May 2008 (p. 5).

Sherringham, K., & Unhelkar, B. (2008a). Elements for the Mobile Enablement of Business. In Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives (2nd ed.). IGI Global.

Sherringham, K., & Unhelkar, B. (2008b). Business Driven Enterprise Architecture and Applications to Support Mobile Business. In Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives (2nd ed.). IGI Global.
Yang, C. C., & Wang, F. L. (2006). Information Delivery for Mobile Business: Architecture for Accessing Large Documents through Mobile Devices. In B. Unhelkar (Ed.), Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives (Chapter 18). Hershey, PA, USA: IGI Global.
KEY TERMS AND DEFINITIONS

Activity Objects: A series of objects invoked from a standard iconic interface that provide business functionality because they contain the necessary content, images, business logic, processing rules, work-flow, and presentation rules.

Contextual Search: Searching of information in context so that useful results are obtained.

FedEx Model: A model for the operation of a unified messaging environment based on the proven principles used by leading logistics companies to move messages (parcels) around the world.

Information Bases: The next generation of databases; instead of storing data, information is stored in context.

Information Relationships: The associations between elements of information that provide context and convert information to knowledge.

Knowledge Utility: Like power and water, knowledge shall become a utility infrastructure that underpins humanity.

Marketplace Computing: Standardised computing (tightly integrated hardware and software) operating at the marketplace level to allow businesses to interact effectively in a marketplace. Business currently standardises at the enterprise level, but to operate effectively, standardisation shall be at the marketplace level.

Real Time Decision Making: The provision of information in context, integrated with work-flow, in real time to any device anywhere anytime, so that decisions can be made.
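To connect several of these terms, the sketch below shows hypothetical activity objects invoked from an iconic interface, where invoking an icon changes the interface to the task on hand. It is a toy illustration of the definitions above, not an implementation from the chapter; all names are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ActivityObject:
    """Everything an icon needs to deliver one piece of business
    functionality: work-flow, business logic, and presentation rules."""
    label: str
    workflow: List[str]
    logic: Callable[[dict], dict]
    presentation: str = "auto"   # resolved per device by the server

class IconicInterface:
    """Invoking an icon swaps the interface to the task on hand."""
    def __init__(self):
        self.icons: Dict[str, ActivityObject] = {}

    def register(self, obj: ActivityObject) -> None:
        self.icons[obj.label] = obj

    def invoke(self, label: str, inputs: dict) -> dict:
        obj = self.icons[label]
        print(f"interface now shows work-flow: {obj.workflow}")
        return obj.logic(inputs)

ui = IconicInterface()
ui.register(ActivityObject("Approve leave",
                           ["open request", "check balance", "approve"],
                           lambda req: {**req, "status": "approved"}))
print(ui.invoke("Approve leave", {"employee": "J. Smith"}))
```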
This work was previously published in Handbook of Research in Mobile Business, Second Edition: Technical, Methodological and Social Perspectives, edited by Bhuvan Unhelkar, pp. 173-181, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 3.16
Business Driven Enterprise Architecture and Applications to Support Mobile Business

Keith Sherringham
IMS Corp, Australia

Bhuvan Unhelkar
MethodScience.com & University of Western Sydney, Australia
ABSTRACT

Information Communication Technology (ICT) needs to provide the knowledge worker with an integrated support system of information management and work-flow. This challenge, however, is further exacerbated in mobile business, wherein the knowledge work is not identified with a particular location. Information systems need to be analyzed and modeled keeping the location-independence of the users in mind. A Model Driven Architecture (MDA) approach, aligned with Object-Orientated Design principles and driven dynamically as the user interacts, has immense potential to deliver solutions for the systems used by the knowledge worker. An MDA approach provides a unified approach to solutions architecture, information management, and business integration. At the enterprise level, on the desktop and the mobile device, and at the emerging marketplace level, the evolving need for real-time decision making on any device, anywhere, anytime, to support mobile business is providing a framework for aligning ICT to business. Further details are presented in this chapter, together with some of the challenges and opportunities to be seen within mobile business.

DOI: 10.4018/978-1-60566-156-8.ch021
INTRODUCTION

Enterprise architecture, application development and requirements gathering have all faced a common problem: that of the business environment being highly dynamic and continuously evolving.
An application that worked is often quickly in need of revision, and an existing infrastructure readily loses its performance advantage because business needs are continually changing. Although the demands of mobile business are adding another level of complexity to application development and enterprise architecture, the mobile enablement of business (Sherringham and Unhelkar 2008a) provides a convergence of events to realign Information Communication Technology (ICT) as the assembly line for knowledge workers. Further recognition of ICT as a utility infrastructure, and of all the utility principles underpinning the design, operation and management of ICT, can also be realised in the mobile enablement of business. The significance of a business focused approach, driven by how the customer interacts, will also be championed during the alignment of ICT to meet mobile business needs (Lan and Unhelkar 2005). Using the demands of mobility, this chapter discusses the alignment of enterprise architecture and application development to meet current and future needs, and how the resulting need for real time decision making will shape some key trends in the ICT industry.
ROLE OF KNOWLEDGE MANAGEMENT IN MOBILE BUSINESS

Through the application of proven business principles, business has standardised catering, cleaning, farming, minerals extraction and manufacturing. The last great challenge is the standardisation of knowledge work, to lower costs and assure guaranteed service delivery (Sherringham 2005). This need for standardisation and the resolution of information management and work-flow becomes more pressing when the needs of mobile business are considered (Sherringham 2008). The situation portrayed in Figure 1 often occurs in organisations: a Customer contacts a Service Representative, who is faced with querying multiple disparate backend systems to find the information required to respond to the Customer's request. The Service Representative may not find what they want, so they have a discussion with a co-worker, who tries to do the same thing and who may bring in another co-worker. In the meantime, the Customer gets frustrated and approaches another Service Representative, who goes through the same process. Add to this the duplication between Internet and Intranet,
Figure 1. Hidden costs of knowledge management present in the enterprise
disparate Web sites, and the sending of e-mails that are not coherently managed, and an in-built hidden cost, with a failure to guarantee service delivery, is seen. Incumbent within the desktop environment, and within many enterprise architectures, is the isolation of data in disparate silos, with a resulting duplication of effort. A scarcity of context for the information and a lack of integration with work-flow further increase hidden costs because of the time spent trying to find information. The demand by customers for mobile business services, combined with the constraints imposed by mobile devices, shall result in a redefinition of enterprise architectures and an optimisation of the desktop environment.a The small screen size inherent in current mobile devices means that, if mobile business services are to be provided and accepted by the user, all of the information management currently required will need to have occurred before delivery to the mobile device. Mobile business will drive the implementation of real time decision making. Instead of users searching and sifting through information, the right information is presented at the right time in the right way to allow decisions to be made, e.g. our favourite restaurants bid in real time to achieve our patronage on any device anywhere anytime (Sherringham and Unhelkar 2008b). The demand for real time decision making from mobile business is expected to be one of the main drivers for the provision of mobile business services, the resolution of information management, and the realignment of ICT to support business needs.
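The restaurant example above describes a concrete real time decision mechanism. As an illustration — our sketch, not the chapter's — the following Java fragment shows how such a decision could be made: bids received within a decision window are scored against the customer's current context and the best one is pushed to whatever device the customer is using. All class, field and method names are assumptions made for this example.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical illustration of real time decision making: suppliers bid for a
// customer's patronage and the best offer is pushed to the customer's device.
// All names here are assumptions for this sketch, not from the chapter.
public class RealTimeDecision {

    record Bid(String restaurant, double price, double distanceKm) {}

    // Score a bid in the customer's current context: cheaper and closer is better.
    static double score(Bid bid) {
        return bid.price() + 2.0 * bid.distanceKm();
    }

    // Select the winning bid from those received inside the decision window.
    static Bid decide(List<Bid> bidsInWindow) {
        return bidsInWindow.stream()
                .min(Comparator.comparingDouble(RealTimeDecision::score))
                .orElseThrow(() -> new IllegalStateException("no bids received"));
    }

    public static void main(String[] args) {
        List<Bid> bids = List.of(
                new Bid("Trattoria Roma", 28.50, 0.8),
                new Bid("Harbour View", 24.00, 3.2),
                new Bid("Corner Bistro", 26.00, 1.1));
        // The winning offer would then be rendered to the user's device,
        // anywhere, anytime, ready for a one-touch decision.
        System.out.println("Best offer: " + decide(bids).restaurant());
    }
}
```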
ALIGNMENT DRIVERS FOR ICT WITHIN MOBILE BUSINESS

Having the correct enterprise architecture is key to an enterprise's business systems, information/data, application and technology strategy, and it impacts on its business processes and users (Cummins, 2002). Through defining the ICT necessary to support knowledge workers as an assembly line for knowledge workers, and by addressing the issue of integration of information with work-flow (driven by how the customer interacts), the necessary elements of enterprise architecture can be readily defined and the necessary integration determined. This assembly line approach leverages the modelling capabilities of Model Driven Architecture to develop platform independent models and solutions (McGovern et al. 2004). In addition to assembly techniques, there are several other principles that have been standard engineering practices for many years and that can be brought to ICT, applications and enterprise architecture to support mobile business and align information management with work-flow:

• Market maturity: The maturity of the market in which mobile business is occurring.
• Business dynamics: How a business responds to the forces of markets, customers, suppliers and legislation.
• Business maturity: The maturity of a business in the application of ICT.
• Utility infrastructure: ICT as a utility infrastructure for mobile business.
Market maturity – As mobile business offerings are brought to market, businesses will operate in an emerging market (size, share or offering). In such emerging markets there are few standards and the market is highly dynamic. Solutions need to be rapidly developed and quickly changed to support growth and product diversification. As markets mature, product diversification is required, specialist needs arise and standards start to develop. Change becomes less prevalent and the focus moves to assured delivery and scalable growth.
In highly mature markets, standards dominate, e.g. ATMs or aircraft. Government compliance is stringent and only a few players can effectively compete. In mature markets, utility infrastructure is the order of the day. A different level of enterprise architecture is required to operate in each of these markets and to support evolving mobile business.

Business dynamics – Even within a market and its sector and segment, business needs are dynamic and are not uniform. Business is driven by market forces, government legislation, customer demand and costs (Figure 2) and, although a lot of commonality of function exists across business, different types of business and different areas of business have differing needs. Markets are often highly dynamic and, where business is heavily impacted by market trends, the need for dynamic real time information and fast updates prevails. Mobile business offerings that support dynamic markets are often high volume in nature with a strong focus on supporting real time updates. Legislative changes are often slow but regularly have a significant impact, e.g. the Sarbanes-Oxley Act (Bowersox et al. 2007). Less dynamic and more considered solutions with extensive audit capabilities are required to support legislative needs. The ability to record mobile business transactions and reconstruct events impacts upon the solution design considerations.

Figure 2. Business factors impacting enterprise architecture and solution design in mobile business

Customers are highly dynamic and are often very demanding. New customers are added regularly to systems and products are quickly shipped in response to demand. Rapid and frequent updates are required to ensure currency of information to service customers. Mobile business solutions that support customers often require sustained and frequent network connectivity with user authentication and transaction validation. Standing orders and long term contracts are often in place with suppliers, and much of the supply process is automated. The ability to place a routine order from a mobile device and to track delivery requires connectivity to the network, but frequent updates are not required.

Business maturity – Within an organisation, different areas of business are at different levels in their respective markets, in their mobile business enablement and in their ability to apply ICT to business. This diversity is a powerful tool for business growth, enterprise architecture alignment and application development because it provides an upgrade path for application sophistication, business maturity and mobile enablement. By looking at the next level of performance up from current operations, the goal of achieving that level of performance and operation can be set and achieved without the need to reinvent the wheel. Successive levels of performance can be progressively realised (Figure 3).

Figure 3. Progressive approach to business maturity and application sophistication

Utility infrastructure – With ICT being an assembly line for knowledge workers, and given its critical role within business and mobile business, ICT plays the role of a utility infrastructure with the following principles included within its design:

• Redundancy: Surplus capacity is included, protected and available to readily scale.
• Fail-over: Include self-initiation and self-configuration should fail-over occur.
• Load bearing capacity: Capability to bear load throughout all parts of the solution.
• Multiple layers of safeguard: Assume failure will occur; single points of failure are avoided and are not aligned.
• Simple: Solutions are kept simple and are highly standardised and modularised.

The following design considerations are also catered for within utility infrastructure and will be required to support mobile business:

• Accommodates change: Change is the norm, and solutions are designed to accommodate it through automatic configuration.
• Achieve scalability: If it cannot be automated, it is not scalable.
• Best of breed: Best of breed is brought together to provide an assembly line for the processing of jobs.
• Form an emergent behaviour: Standardised components do what they do best and the resulting emergent behaviour delivers an industrial strength utility solution.

All of these principles impact upon the enterprise architecture, the design of applications and the ability to provide mobile business services.

USER INTERACTION AND REAL TIME DECISION MAKING TO ALIGN ICT FOR MOBILE BUSINESS

With a strong business focus inherent to the design and application of ICT, and a recognition of the drivers impacting design, a methodology for delivering a scalable Service Orientated Architecture (Soley et al. 2000) that meets current and future mobile business needs can be established (Figure 4).

Figure 4. ICT to support mobile business solutions

The process starts with a common interface that crosses platforms and devices and which incorporates work-flow to provide context. Such an interface can be readily designed and applied by users at all levels of business. Using the visual elements of the interface to drive requirements and process definition, non-technical resources can conceptualise the required business functionality, and with clarity of vision comes a well defined scope and a clear expectation. The user interface of Apple's iPhone marks an evolution in the type of interface that is required for supporting mobile business and real time decision making on any device anywhere any time. The interface is clean, simple and uses self-explanatory icons that, when invoked, provide the required functionality. The extension of the Apple iPhone approach is to build in the required work-flow to
conduct business into the interface. Rather than having icons launch software applications (the desktop) or control elements (adjusting volume on an iPhone), they launch objects of business functionality, e.g. pay an account. An iconic interface has other advantages: the use of icons is intuitive and spans languages; the interface can be readily customised to reflect branding and personal preferences; and the interface presents a common environment across different devices and operating systems. One other advantage that becomes more significant for mobile business is that an end user defined and process driven iconic interface would also eliminate the need for specific applications, because it is about presenting only the required elements of functionality, irrespective of where they reside within a software application suite. An interface consisting of a series of icons that invoke business functionality can be created in real time to reflect specific needs, i.e. the interface automatically refreshes to reflect the changing activities being undertaken by a user. The interface would also include the supporting elements to complete an activity, i.e. a seamless interface into searching and messaging, whilst connecting to other tasks without the need for cumbersome menu driven navigation. The interface embeds business logic and supports standardised processes. Whether the icon is on a desktop PC or a mobile device, clicking an icon would invoke
the appropriate activity object that delivers the required business functionality, e.g. those for a sales process (Figure 5A). As its name implies, and as discussed subsequently, an activity object is an object of business functionality that, when invoked, implements the task required of it. An activity object contains the images, data, processing rules and business logic necessary to complete the purpose for which it has been created. Invoking one activity object would implement the other activity objects required for the process. Using the sales process example, objects for prospecting, product details, account details, contact details, events management and financials could all be included. Activating the account object (Figure 5B) would initiate a series of other objects, e.g. credit management, service management, account update, company and contact details, account creation and update, and guarantees and warranties. An object hierarchy exists, and this extends from the interface through the hierarchy into the event specific details and, finally, the underlying code. This activity object approach allows common objects of functionality to be established, e.g. banking. A hierarchy of activity objects exists to reflect standardised processes, and these activity objects are drawn together and presented as required. These activity objects can be included in multiple processes and accessed from within and across enterprises on any device.
Figure 5. Definition of activity objects: A) Sales object, B) object hierarchy, C) object structure
Any activity object (Figure 5C) is made up of the following (a minimal code sketch of one such object follows the list):

• Content: The content that an activity object needs to use and process. Content shall be for both the activity objects themselves and their functionality.
• Images: The images that an activity object needs to use and process. Images are for both the activity object itself and for its functionality.
• Business logic: This is the actual business logic an activity object uses for completion of its processing task.
• Processing rules: These are the rules that define how an activity object operates, i.e. what information, images and business logic an activity object needs to process.
• Work-flow: This is the work-flow in which an activity object resides, i.e. the context of the activity object. This allows users to work their way through a process to the required outcome.
• Presentation rules: The rules that go to create or make an activity object, i.e. what information, images and business logic an activity object needs to function and how the activity object functions.
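To make this six-part structure concrete, the following minimal Java sketch models an activity object; it is an illustration under assumed names (ActivityObject, invoke, and so on), not an API from the chapter or from any product.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the six-part activity object described above.
// All names are assumptions for illustration; the chapter defines the
// concept, not an API.
public class ActivityObjectSketch {

    record ActivityObject(
            String name,                           // e.g. "Pay an account"
            Map<String, String> content,           // content the object uses and processes
            List<String> images,                   // icons and imagery for the interface
            Runnable businessLogic,                // the logic that completes the task
            Map<String, String> processingRules,   // how the object operates
            List<String> workflow,                 // the process context the object sits in
            Map<String, String> presentationRules  // how the object is rendered
    ) {
        // Invoking the object executes its business logic within its work-flow,
        // then reports the next step so the interface can refresh.
        void invoke() {
            businessLogic.run();
            System.out.println(name + " done; next step: " + workflow.get(0));
        }
    }

    public static void main(String[] args) {
        ActivityObject payAccount = new ActivityObject(
                "Pay an account",
                Map.of("account", "ACC-1042"),
                List.of("pay-icon.png"),
                () -> System.out.println("posting payment for ACC-1042"),
                Map.of("currency", "AUD"),
                List.of("confirm receipt", "update ledger"),
                Map.of("device", "mobile"));
        payAccount.invoke();
    }
}
```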
Supporting the activity objects is a series of virtual contextual information bases that allow the activity objects to function (a small assembly sketch follows the list):

• Information databases: Information that provides the required knowledge within an activity object.
• Information relationship databases: Manage the informational relationships that transform information to knowledge.
• Rule processing databases: Provide the rules for work-flow, business logic and processing of the information, i.e. a recipe compiler.
• Activity object databases: Databases for the configurable rules, images, etc. for the activity objects that flow through into the interface.
• Exceptions databases: Details on how to handle exceptions for a given activity object in a given circumstance.
• Messaging databases: Handle the rendering of information to multiple devices irrespective of location and time. This includes device configuration and management information.
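As a further illustration — again ours, with assumed names and content — the sketch below shows how these information bases might cooperate to assemble an activity object for a given device; each map stands in for one of the virtual contextual information bases listed above.

```java
import java.util.Map;

// Hypothetical sketch: assembling an activity object for delivery to a device
// by consulting the virtual contextual information bases listed above. The
// maps stand in for the databases; all names and content are assumptions.
public class ActivityObjectAssemblySketch {

    static final Map<String, String> informationBase =
            Map.of("pay-account", "account balance and payee details");
    static final Map<String, String> ruleProcessingBase =
            Map.of("pay-account", "validate amount; check authorisation");
    static final Map<String, String> messagingBase =
            Map.of("mobile", "render compact, icon-first layout");

    // Draw the pieces together, ready to render to the requesting device.
    static String assemble(String objectName, String deviceType) {
        return "object=" + objectName
                + "; information=" + informationBase.get(objectName)
                + "; rules=" + ruleProcessingBase.get(objectName)
                + "; rendering=" + messagingBase.get(deviceType);
    }

    public static void main(String[] args) {
        System.out.println(assemble("pay-account", "mobile"));
    }
}
```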
This simple approach of defining activity objects from the user interface into the enterprise architecture has many advantages, including:

• Solutions are driven by the end user as the customer interacts – gaining their buy-in and support.
• The required information and business processes are readily determined.
• The solutions required to support operation are defined.
• The required enterprise architecture and applications are determined.
• Simply rendering activity objects to an end device, with all of the required elements ready for implementation, has many advantages for the implementation of mobile business.
The use of activity objects has the following impacts:

• Seamless iconic interface: The interface is a series of icons, managed within one coherent framework, which invoke functionality. The interface encapsulates the required work-flow, which integrates and refreshes as required. Such an interface works on any device (desktop or mobile), providing ready business functionality, intuitive use and ease of training.
• Business driven functionality: The activity objects and their hierarchy can be readily defined and reflect business operations. The activity objects drive outcomes and can be integrated to support many business operations. A single set of activity objects can be determined and used on both mobile devices and the desktop.
• Definition of work-flow: Business process and the information required at each step are defined as the customer interacts to realise the outcome. Contrast this with the current practice of designing processes around feature driven software. This functional approach is well suited to mobile devices because of their small screen size.
• Clear requirements: Many development projects suffer because requirements are not clearly defined. By defining the required processes and objects necessary to deliver outcomes, requirements are clearly defined. In contrast with many ICT projects, mobile business services can now be brought to market on time and to budget.
• Application development: Application development changes from feature driven and a series of specialised applications to a series of activity objects that are linked together within a common framework as required. Developers no longer code applications; they code object functionality which is brought together to deliver outcomes as required by the user.
• Standardised activity objects: Elements of business functionality are common across many areas of business operation and types of business. When using activity objects, banking functionality is no longer tied to accounting and financial applications; elements of banking functionality become available as needed. This use of activity objects allows for standardised elements to be developed and delivered across business, markets and devices.
• Resolution of information: An iconic interface traps work-flow, processes are defined and the steps required to deliver outcomes are also clearly specified. By definition, the information required at each step is also determined. This approach defines the information required and the context necessary for its application, i.e. it resolves the information management problem.
• Information sharing: Activity objects are ideal for sharing information between other objects or for interfacing with other applications and mobile devices because they contain the required information, business rules and processing rules necessary to function. In addition, activity objects can address error handling and exception handling. These self contained objects can be readily used and shared between mobile devices.
• Resolution of the ICT assembly line: Whether it is insurance premiums, foreign exchange trades or answering customer queries for product, business is about routine transaction processing and having the right information presented at the right time in the right way, i.e. ICT is the assembly line for knowledge workers. The activity object approach defines the knowledge worker assembly line across the desktop and mobile devices.
• Definition of Service Orientated Architecture: From a definition of the knowledge worker assembly line comes a resolution of the ICT required to support the assembly line, both hardware and software. How ICT is to be deployed and operated is also determined, i.e. a Service Orientated Architecture is defined that is driven as the customer interacts.
MOBILITY, ACTIVITY OBJECTS AND MODEL DRIVEN ARCHITECTURE

The Model Driven Architecture approach and the use of activity objects come to the fore when it comes to mobile computing and the demand for mobile services (Lee et al. 2004). Many incumbent solutions, with their inefficiency in accessing information and an absence of unification of information with work-flow, become almost ineffective in servicing mobile devices. Cumbersome interfaces that do not deliver outcomes cannot be used on mobile devices because of their screen size. Unlike the desktop PC, the storing of data locally on an end device is no longer an option. Complex business applications, rich in features that people rarely use, are not easily transferred to mobile devices (Paavilainen 2001). Simplicity, with clear interfaces that deliver outcomes, is what works on the mobile device. The need for effective and efficient mobile interfaces shall in turn impact upon the incumbents in the desktop environment and create both a common interface across devices and an optimised supporting infrastructure. Other areas where the application of mobility leverages model driven architecture include:
• Information Relationships: To be effective, information needs to be delivered in context for decision making. Context is given by managing the relationships between information through a metadata framework that integrates with work-flow.
• Work-flow: Activity objects include their own work-flow and business logic, but their strength comes in the combining of standard activity objects together to provide a consolidated work-flow.
• Messaging environment: Mobility requires a unified messaging environment combining voice, data, text, images and video. To guarantee service delivery, the messaging environment needs to be architected around the FedEx model, where the quality of hand-off, the message, and the delivery and storage of the message are all separated and utility infrastructure is used. A business model driven architecture approach services the needs of a unified messaging environment.
• Error and exception handling: Effective and efficient transaction processing is all about how errors and exceptions are managed. Whilst addressing the quality of hand-off issues is part of the solution, having dedicated processes to manage errors and exceptions is also required. Specific objects to handle errors and exceptions can be defined and called in sequence until the issues are resolved (a code sketch of this calling-in-sequence pattern follows the list).
• Hand-off: Integration of information, integration between systems, integration of context and work-flow all rely on an effective quality of hand-off between different elements. Self contained objects with all of the required elements provide an effective solution for ensuring a quality of hand-off.
• Security: The topic of security is complex and is the subject of much concern and extensive discussion (Nand 2006). An object approach helps to facilitate security because the necessary role-based access and functionality is included within the object. Additional security features and capabilities can be included as elements within the objects.
Mobility and the demand for mobile services is what will drive the development of object orientated solutions that shall manifest on both the desktop and in the enterprise. Delivering objects of functionality to a mobile device, as required, with all of the necessary information, business logic and work-flow, has several advantages:

• Across platform – A common interface with standard user driven functionality can be delivered across devices and platforms. This provides the end user with what they need irrespective of which device they use and the location in which they operate, i.e. "my desk where I want it and how I want it".
• Consolidated information-bases – With the creation of virtual consolidated information-bases, information can be single sourced. Information is no longer trapped on the end device and all of the archiving and backup are conducted at the server.
• Smart end devices – The benefits of distributed computing power are realised without all of the complexities inherent to the desktop environment. The end device is a smart device. The benefits of lower cost, ease of use and better asset management are transparent.
• Information management simplification – The inefficiency of the desktop environment and the need for advanced information management skills to manage versions, locations, formats and applications is not sustainable on the mobile device. Information presented at the right time in the right way to any device anywhere anytime is required. The optimisation necessary for mobility shall also drive changes in the desktop environment.
• Defines ICT required – A business focused model driven architecture approach helps determine the business need and is driven by the business process; the information required is identified and integrated to work-flow. At this point the ICT solution necessary to support a business, and how it should be deployed, is transparent.
• Responsive to business need – Business is very dynamic and one of the existing challenges is trying to define requirements against a rapidly changing business environment. An object approach allows the objects required to be combined in real time to meet business needs. Since standardised objects can be defined and applied across many processes and areas of business, application development becomes much more responsive to dynamic business needs.
FUTURE DIRECTIONS

Real time decision making and activity objects have many uses within both mobility and the wider enterprise business application of ICT. One of the evolving needs is in the application of gaming solutions to business training, education and simulation. Gaming is one of the fastest growing services of mobile computing and, whilst gaming offers many market opportunities in its own right, it is the wider business use from mobile devices that is to be realised. Flight simulators have been widely applied to train and skill staff and to provide experience in handling difficult situations. The next level is to assist in business decision making through simulation and what-if scenarios. Strong visual presentation and a rich, immersive and interactive experience to provide scenarios and identify issues and outcomes is a major area where model driven architecture and activity objects shall be used. The simulation and modelling results shall be accessible from mobile devices as well as the desktop. The provision of Software as a Service (SaaS) is in its infancy and the mobile opportunities for SaaS are still to be realised. Much of SaaS is still very application specific and there have been concerns about the quality and applicability of code supplied with SaaS. The activity object approach is the natural extension of SaaS, with only those activity objects required being rendered as a service. The development of mobile computing will see a proliferation of solutions and services and an incredible diversity of offerings. As the market matures, standards shall come into play and consolidation shall be seen. Standardisation
lowers costs and guarantees service delivery, and in turn this creates market dominance. Being the de-facto market standard is how a business gains market dominance. The software of the future is not feature rich applications but tightly integrated activity objects drawn together as needed to deliver outcomes. Being the de-facto standard for activity objects shall lead to standardisation at the desktop, at the enterprise, at the mobile device level and, in turn, at the marketplace level (the emergence of marketplace computing).
CONCLUSION

Mobile business and its demand for real time decision making is a powerful driver for aligning ICT and applications to deliver the ICT infrastructure necessary for mobile business. The use of a standard iconic interface and activity objects that invoke business functionality provides an effective solution for delivering mobile business services, whilst resolving the information management and work-flow issue and aligning enterprise architecture and application development.
REFERENCES

Bowersox, D. J., Closs, D. J., & Cooper, M. B. (2007). Supply Chain Logistics Management (2nd ed.). Irwin: McGraw-Hill.

Cummins, F. A. (2002). Enterprise Integration: An Architecture for Enterprise Application and Systems Integration. Canada: John Wiley & Sons.

Lan, Y., & Unhelkar, B. (2005). Global Enterprise Transitions: Managing the Process. Hershey, PA: IGI Global.

Lee, V., Schneider, H., & Schell, R. (2004). Mobile Applications: Architecture, Design, and Development. Prentice Hall Professional Technical Reference, Pearson Education.

McGovern, J., Ambler, S. W., Stevens, M. E., Linn, J., Sharan, V., & Jo, E. K. (2004). A Practical Guide to Enterprise Architecture (Foreword by O. Sims). Pearson Education.

Nand, S. (2006). Developing a Theory of Portable Public Key Infrastructure (PORTABLEPKI) for Mobile Business Security. In Unhelkar, B. (Ed.), Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives. Hershey, PA: IGI Global.

Paavilainen, J. (2001). Mobile Business Strategies: Understanding the Technologies and Opportunities. Wireless Press/Addison-Wesley in partnership with IT Press.

Sherringham, K. (2005). Cookbook for Market Dominance and Shareholder Value: Standardising the Roles of Knowledge Workers. London: Athena Press. (p. 90).

Sherringham, K. (2008, April/May). Catching the Mobility Wave. Information Age. (p. 5).

Sherringham, K., & Unhelkar, B. (2008a). Elements for the Mobile Enablement of Business. In Unhelkar, B. (Ed.), Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives (2nd ed.). Hershey, PA: IGI Global. (pp. xxx–yyy).

Sherringham, K., & Unhelkar, B. (2008b). Real Time Decision Making and Mobile Technologies. In Unhelkar, B. (Ed.), Handbook of Research in Mobile Business: Technical, Methodological and Social Perspectives (2nd ed.). Hershey, PA: IGI Global. (pp. xxx–yyy).

Soley, R., & OMG Staff Strategy Group (2000). Model Driven Architecture (White Paper). Object Management Group.
KEY TERMS AND DEFINITIONS

Activity Objects: A series of objects, invoked from a standard iconic interface, that provide business functionality because they contain the necessary content, images, business logic, processing rules, work-flow and presentation rules.

FedEx Model: A model for the operation of a unified messaging environment, based on the proven principles used by leading logistics companies to move messages (parcels) around the world.

Information Bases: The next generation of databases; instead of storing data, information is stored in context.

Information Relationships: The associations between elements of information that provide context and convert information to knowledge.

Knowledge Worker Assembly Line: Knowledge workers take information and value-add to it to provide services. ICT needs to provide the right information at the right time in the right way for knowledge workers to effectively operate, i.e. ICT is the assembly line for knowledge workers.

Marketplace Computing: Standardised computing (tightly integrated hardware and software) operating at the marketplace level to allow businesses to interact effectively in a marketplace. Business currently standardises at the enterprise level, but to operate effectively, standardisation shall be at the marketplace level.

Real Time Decision Making: The provision of information, in context and integrated with work-flow, in real time, to any device anywhere anytime, so that decisions can be made.
ENDNOTE

a. The introduction of the automatic teller machine (ATM) led many banks to redevelop their enterprise architectures; those that could not respond effectively were at a competitive disadvantage, in some cases selling off their retail operations.
This work was previously published in Handbook of Research in Mobile Business, Second Edition: Technical, Methodological and Social Perspectives, edited by Bhuvan Unhelkar, pp. 214-224, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 3.17
Mobile Technologies Extending ERP Systems

Dirk Werth
Institute for Information Systems at German Research Centre for Artificial Intelligence, Germany

Paul Makuch
Institute for Information Systems at German Research Centre for Artificial Intelligence, Germany
ABSTRACT

Nowadays the majority of enterprises use Enterprise Resource Planning (ERP) software to improve their business processes. Simultaneously, mobile technologies which can be used within ERP have gained further importance. This is because ERP, together with mobile technologies, offers a wide spectrum of synergies and both have a significant impact on enterprise efficiency. The improvement possibilities in ERP due to mobility range from sales activities, over logistic processes, up to effects on human resource management.

INTRODUCTION

Enterprise Resource Planning (ERP) systems have become the IT backbone of most enterprises. Several publications, articles and surveys mention that almost 70-80% of Fortune-1000 enterprises use ERP systems to improve their business processes. ERP systems have changed the way enterprises conduct their business, as many functionalities which only a few years ago had to be done manually are now automatically provided by the system. Similar to the Internet, mobile technologies have largely grown in the last years; e.g. mobile phones have become a standard communication device in most countries. In the near future mobility and flex-
DOI: 10.4018/978-1-60566-156-8.ch041
ibility will be a key issue that will enable organizations to withstand competition in an environment characterized by increasing cost pressures. Using mobile technologies for commercial purposes is one option that can certainly make business processes more efficient. This chapter discusses how mobile business has opened new opportunities in ERP systems. Furthermore, it also discusses various other business processes that are influenced by mobile technologies (e.g. buying train tickets via the mobile phone). However, it should be noted that the influence of mobile technologies is not limited to consumer interactions: established applications can also be enriched by mobile technology, resulting in new or improved functionalities. This chapter explores the impact of mobile technology on ERP systems and demonstrates some use cases where such technology can significantly improve ERP functionality.
ERP SYSTEMS

Enterprise Resource Planning software offers a spectrum of activities which support enterprises in organizing important business processes by providing multimodular software applications. ERP systems evolved in the middle of the 1990s from manufacturing resource planning (MRPII) systems. Such systems aim to plan and steer the output generation within an enterprise. They comprise all logistic activities, from purchase planning and execution, through manufacturing planning, steering and supervision, to sales and after-sales activities. MRPII systems mainly cover the logistical view of the enterprise. Extending MRPII systems by human resource management and by financial management has resulted in ERP systems that aim to cover all activities and business processes within an enterprise. Nowadays every business transaction can be monitored, analyzed and evaluated. The performance properties of ERP systems are:
• Branch neutrality: ERP software is normally not aligned to a specific branch.
• Operating efficiency: The special emphasis is placed on efficiency, not on technology.
• Modularity: There are enclosed areas of activity within the software, called modules.
• Integration: All business activities as an aggregate are continuously supported.
• Standard software: ERP systems are not designed for an individual purpose. In fact they are sold on an anonymous market, but of course they can be customized, i.e. adapted to fit customer needs.
ERP systems differ from each other in their complexity, range of functions and procurement costs. First, it depends on the branch in which the enterprise operates using ERP technology: with an ascending number of suppliers or products, more modern warehouse systems or new distribution channels, the complexity of such a system increases. Second, the size of the enterprise, including its whole network, matters. Small and medium sized businesses do not need the same range of functions as a worldwide operating multinational company. This certainly has an influence on the price of the software solutions; e.g. small businesses can install standard versions, whereas large concerns need specially developed additional modules. Third, the number of users working with the ERP system plays an important role: the more accounts that work simultaneously, the more powerful the hardware needed to guarantee unproblematic operation. Last, the technological base used to realize an ERP system, especially the database and the programming language, is a key factor determining the complexity and the range of functions. In the future, ERP systems will be increasingly standardized. Therefore the flexibility and mobility of user interfaces will play a more and more important role in generating additional advantages.
MOBILE TECHNOLOGY

Mobile technology is newer than ERP systems. The first technological achievements covered mobile speech transmission using analog mobile phones. At the end of the 1990s, the technology broadened along two streams: on the one side the digitalization of mobile technology, on the other side the inclusion of text and data services. The most used service is the short message service (SMS), originally developed for usage in the global system for mobile communications (GSM). Today, several kinds of mobile devices are available on the market, ranging from simple GSM mobile phones, over ultramodern personal digital assistants (PDAs) connected via the universal mobile telecommunications standard (UMTS), up to radio frequency identification (RFID) tags which can simplify warehouse processes. By using portable terminals and mobile data transfer technology, users establish a connection to wireless firm-owned network services. They are locally and temporally independent and always available. As a result they are able to make transactions from almost every place on earth. Additionally, portable terminals are easier to operate and have a shorter boot time than locally installed user interfaces, due to the fact that decentralized devices normally only include the essential range of functions. As far as security is concerned, the software or the hardware normally includes personal identification processes, e.g. a subscriber identity module (SIM) card or password protection. This is necessary to ensure that third parties are not able to enter the network and see or manipulate enterprise information. As each network user has their own mobile device and a corresponding account, personalization possibilities are nearly unlimited, i.e. it is possible to define different views of and accesses to the central database. Personalization allows users to work more effectively because everyone is allowed to individually determine their preferred properties. This also leads to more cost-effectiveness. In addition, the costs of mobile devices like PDAs are continuously decreasing and far below comparable notebook prices. In terms of realizing saving potentials, the localizability aspect probably plays the most important role; for example, RFID tags allow enterprises to improve their logistic activities (see "Logistics improvement"). In the next sections we discuss the potential advantages by presenting a use case for each improvement field.
SALES IMPROVEMENT, COST REDUCTION

Sales can be increased by using mobile technology to extend the functionality of ERP systems. To get a better idea of the improvement potential, consider the example of the "traveling salesman". His main fields of activity are customer acquisition, providing information to customers, product sales and ordering, and after-sales activities. These tasks are normally supported by the ERP system; e.g. new customers have to be set up with their individual customer number. All customer orders, including their content, volume and value, are registered and can be tracked on the basis of a voucher. With a mobile device this information can be entered or accessed wherever the salesman, respectively the customer, is located. Furthermore, constantly updated price lists can be presented and products can be ordered online, including special services like delivery time determination. This allows both increased customer satisfaction and increased business efficiency. On the one side, the enterprise is able to respond fluidly to changing conditions in customer demand, as the salesman is always directly linked to the development and purchase departments via mobile technology. Thus production can be flexibly adapted and it is not necessary to estimate potential sales figures, i.e. the risk of wasted production capacity as a result of non-saleable goods is significantly reduced. On
the other side, the complexity of the ERP system is decoupled. Only the functions needed for the salesman's activities are supported by the mobile device; unnecessary and complex features of the system are removed, respectively not available. By embedding automatic synchronization and update functions, the employee spends a minimum of their working time administrating the ERP system.
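As a hedged illustration of this decoupling — our sketch, not code from the chapter — the following Java fragment shows a thin mobile client that surfaces only the order-entry slice of the ERP system; the interface and all names are assumptions made for the example.

```java
import java.time.LocalDate;

// Hypothetical sketch: a mobile client exposes only the slice of ERP
// functionality the traveling salesman needs. Names are assumptions
// made for this illustration.
public class MobileOrderEntrySketch {

    // The one ERP capability the mobile device surfaces: place an order
    // and get a delivery date back, online, wherever the customer is.
    interface ErpOrderService {
        LocalDate placeOrder(String customerNo, String productNo, int quantity);
    }

    public static void main(String[] args) {
        // Stub standing in for the real ERP backend reached over the
        // wireless network; a production system would call a remote service.
        ErpOrderService erp = (customer, product, qty) -> {
            System.out.printf("order: %d x %s for %s%n", qty, product, customer);
            return LocalDate.now().plusDays(3); // ERP-determined delivery time
        };
        LocalDate delivery = erp.placeOrder("CUST-0815", "PROD-42", 10);
        System.out.println("Promised delivery: " + delivery);
    }
}
```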
LOGISTICS IMPROVEMENT

With regard to cost efficiency, ERP supported enterprise warehouse systems gain in importance. In the near future the new generation of the radio frequency identification (RFID) standard will help to save expenses. The RFID concept is based on contactless data transmission by electromagnetic alternating fields (Hertel, Zentes & Schramm-Klein, 2005). Special RFID tags serve as data carriers and allow the reading, processing and changing of the information contained on the chip. The main application possibilities are quite varied (ECR-D-A-CH, 2003; Füßler, 2004):

• Production: After production, goods are individually equipped with an RFID tag which allows identifying their position at every step of the supply chain.
• Stocks monitoring: RFID technology allows tracing the receipt of goods, the warehouse process itself, and outgoing goods. As periodically recurring inventory processes always tie up a lot of employees, mobile devices in combination with RFID technology can help to make counting operations more efficient, i.e. easier and faster. All stocks can be counted by scanning the RFID tags with a mobile scanner unit, e.g. a modern PDA extended by a radio frequency receiver. Especially with regard to homogeneous stocks, documentation operations can be accelerated (a code sketch of such a counting operation follows the list).
• After-sales activities: Even after the selling process, the RFID tags remain on the products and can be used for automatic replenishment, reclamation or exchange procedures, and after-sales disposal activities.
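The following Java sketch illustrates the stock-counting use case above: a mobile scanner reads every RFID tag in range in one pass and aggregates the counts per product, ready to be posted to the ERP warehouse module. The tag format and all names are assumptions made for this illustration.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of RFID-assisted stock counting: one scanner pass is
// aggregated into a product -> quantity count. Names are assumptions.
public class RfidStockCountSketch {

    // Each scanned tag carries a product number and a unique item serial.
    record Tag(String productNo, String serialNo) {}

    // Aggregate one scanner pass into a per-product stock count.
    static Map<String, Long> countStock(List<Tag> scannedTags) {
        Map<String, Long> counts = new TreeMap<>();
        for (Tag tag : scannedTags) {
            counts.merge(tag.productNo(), 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Tag> scan = List.of(
                new Tag("PROD-42", "SN-001"),
                new Tag("PROD-42", "SN-002"),
                new Tag("PROD-77", "SN-003"));
        // In a real deployment this map would be sent to the ERP system
        // over the wireless network instead of printed.
        System.out.println("Inventory: " + countStock(scan));
    }
}
```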
Due to the relatively high costs of the needed transponder technology, RFID was initially only used in big business logistics, e.g. in container handling facilities. As the costs of the obligatory hardware are decreasing, the usage of the technology becomes more and more efficient for other purposes. Points of sale will use RFID in order to accelerate sales activities or to reduce consignment and personnel costs; e.g. a supermarket can save costs by providing fully automatic cash desks. It is not necessary to scan each product a consumer wants to buy, and not even visual contact has to be established, as several RFID data carriers can be ascertained within one single read operation. All goods within the customer's shopping trolley are identified by driving it through a scanner unit.
HUMAN RESOURCE IMPROVEMENT

In this section we discuss improvement potentials for human resources (HR) by looking at the accounting of travel expenses. ERP systems also support automatic notes of expenses. If an employee comes back from a business trip and wants their travelling expenses to be reimbursed, the system only needs the payment vouchers, the employee number and the release signal for clearing the payment. With mobile technology this operation can be accelerated once again. Contemporaneously with the prepayment by the employee, the vouchers can be digitally submitted to the office; the system validates the sums and pays the bill, e.g. paying for bus and train tickets via mobile phone. Since April 2007 there has been a pilot project with twelve participating German cities providing such a service to their citizens. The system is nationally standardized and was developed by member firms and groups
of the Association of German Transport Companies (VDV), Siemens IT Solutions and Services, DVB LogPay and the Fraunhofer-Institut IVI Dresden. After completing a one-time registration and selecting a preferred payment method, users receive a text message containing a Java application element that is used for ordering the tickets. The mobile phone screen allows the user to enter the type and value of the ticket, whereby single tickets and day passes are available. After the payment process, the mobile phone owner receives an on-screen confirmation serving as a receipt, e.g. for ticket inspection (Soft32, 2007). This could be extended by directly sending a voucher to the company's ERP system. Obviously this development reduces personnel cost and saves administration time because no paper documentation is needed.
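To illustrate the suggested extension of sending the voucher directly to the company's ERP system — our sketch, with an assumed auto-approval rule and assumed names — the following Java fragment validates a digital voucher and either clears the reimbursement or queues it for review.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a digital ticket receipt is forwarded from the mobile
// phone straight to the ERP expense module, which validates the sum and
// clears the reimbursement. The threshold and names are assumptions.
public class MobileExpenseSketch {

    record Voucher(String employeeNo, LocalDate date, String description, double amount) {}

    static final double AUTO_APPROVAL_LIMIT = 50.00; // assumed policy threshold

    static final List<String> paymentLog = new ArrayList<>();

    // Validate the voucher and either clear the payment or queue it for review.
    static void submit(Voucher v) {
        if (v.amount() > 0 && v.amount() <= AUTO_APPROVAL_LIMIT) {
            paymentLog.add("reimbursed " + v.amount() + " EUR to " + v.employeeNo());
        } else {
            paymentLog.add("queued for manual review: " + v.description());
        }
    }

    public static void main(String[] args) {
        submit(new Voucher("EMP-007", LocalDate.now(), "day pass, Münster", 6.90));
        submit(new Voucher("EMP-007", LocalDate.now(), "hotel", 120.00));
        paymentLog.forEach(System.out::println);
    }
}
```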
CONCLUSION

This chapter explored new ways of enriching standardized ERP systems with modern mobile technology. Mobile components and devices enable enterprises to make their business processes more efficient. Via wireless networks, employees are permanently linked to the ERP system, are able to work online, and can use the saved time for more productive activities. The cases presented above give a general overview of the possible new business opportunities when ERP systems are used in connection with mobile devices. They will have a significant influence on future activities within an enterprise. With the increasing technological performance of mobile devices, the structure of an ERP system will change from a centralized main system into a network consisting of independently operating and interlinked mobile devices. The discussion in this chapter can be further enhanced by subjecting it to research validation, which is currently outside the scope of this chapter.

REFERENCES

ECR D-A-CH. (2003). RFID – Optimierung der Value Chain. Köln.

Füßler, A. (2004). Auswirkungen der RFID-Technologie auf die Gestaltung der Versorgungskette. In Zentes, J., Biesiada, H., & Schramm-Klein, H. (Eds.), Performance-Leadership im Handel (pp. 137-155). Frankfurt a.M.

Hertel, J., Zentes, J., & Schramm-Klein, H. (2005). Supply Chain Management und Warenwirtschaftssysteme im Handel (pp. 207-210). Heidelberg: Springer.

Soft32 (2007). http://news.soft32.com/bus-and-train-tickets-via-mobile-phone-in-munster-germany_5232.html

KEY TERMS AND DEFINITIONS

Business Process: A target-oriented, logical sequence of activities which can be performed by multiple collaborating organisational units using information and communication technologies. This system of functions makes a substantial contribution to the generation of added value.

Enterprise Resource Planning (ERP) Systems: Integrated packages of standardized software applications supporting the resource planning of an enterprise. Financial, logistical and human resource related business processes can be improved by using an ERP system.

Mobile Business: Describes the initiation and the entire support, execution and maintenance of business transactions between business partners through the use of wireless electronic network communication technology and mobile devices.

Mobile Business Processes: Integrate mobile solutions into classic business processes. Mobile work leads to new collaborative opportunities, improves the enterprise workflow and enables the transaction of digital business processes.
Mobile ERP: Solutions that extend traditional ERP systems by location-independently collecting and exchanging data via mobile devices and wireless transfer mechanisms. Standardized interfaces allow a direct and steady connection to the ERP hardware and lead to more flexible and efficient business processes within an enterprise.

Radio Frequency Identification (RFID) System: Allows contactless data transmission by electromagnetic alternating fields and is often used for automatic identification and data acquisition.

Sensory ERP: A concept for next generation ERP systems. It enables the ERP system to automatically acquire data and supervise enterprise states and events by using sensors (e.g. RFID tags and gates, GPS trackers, etc.). Interfacing between the physical world and the ERP system is no longer performed through human workers; instead, the data is collected by sensors that directly assess physical states and that are part of the real world itself. By this, the error rate significantly decreases and business processes become more efficient.
This work was previously published in Handbook of Research in Mobile Business, Second Edition: Technical, Methodological and Social Perspectives, edited by Bhuvan Unhelkar, pp. 440-444, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 3.18
Convergence in Mobile Internet with Service Oriented Architecture and Its Value to Business

Marco Garito
Digital Business, Italy
ABSTRACT

The word "convergence" refers to the combination of fixed and mobile communication, a situation where a private or business user can take advantage of being constantly connected and be able to retrieve applications and data by swapping device, within the limitations that a mobile device may have, such as a smaller screen and keyboard, reduced storage capability, and limited battery power. Convergence can also include imagining how mobile technology can be a component of everyday items and how data, applications, and services can be delivered via the network infrastructure. This chapter aims to cover RFID technology, bar code and Service-Oriented Architecture (SOA): the first two technologies are dealt with in parallel to provide an overall view of advantages and disadvantages, while SOA will be part of a distinct discussion and analysis. Eventually, some practical examples of these discussed technologies are provided.

DOI: 10.4018/978-1-60566-156-8.ch054
INTRODUCTION

RFID and bar code are two emerging mobile technologies able to provide competitive strategic advantage to business when properly deployed and implemented. As extensions of the current fixed network infrastructure, coordination with existing business processes, departments and organizational structure is an essential part of a rewarding implementation. The development of
an SOA environment can further enhance the capabilities and possible outcomes of RFID and bar code. This chapter outlines advantages and disadvantages of both, providing examples of how they co-exist and how they can create value.
RFID and Bar Code

Radio frequency identification (RFID) is one of the most interesting technologies today: its use impacts a large number of protagonists in private and business environments, but it also raises both simple and dramatic issues in legal, social and political affairs. RFID's history goes back to the 1930s and 1940s, when the British military, during World War 2, pioneered RFID to identify its own aircraft returning home from bombing runs over Europe; early radar systems could spot an incoming airplane but not its type (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005). RFID uses radio waves to detect physical items (both living and inanimate), and therefore the range of identifiable objects includes everything and everywhere: RFID is an example of an automatic identification technology, through which an item is automatically detected and classified. Bar code, biometric, voice identification and optical character recognition systems are other examples of automatic identification. The RFID environment consists of a set of mandatory and optional components. The mandatory parts are tag, reader, antenna and controller; sensor, actuator, host and software system, and communication infrastructure are the optional parts. Figure 1 describes the types and usage of RFID in different situations, according to their technical features.

Figure 1. Type and usage of RFID

The advantages of RFID can be classified as follows (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005):

• An RFID tag can be read without any physical contact between the tag and the reader
• The data of an RFID tag can be rewritten several times with no diminishing quality or integrity
• A line of sight is not required for an RFID reader to read a tag
• The read range can vary from a few centimetres to some metres
• The storage capability of a tag is unlimited
• A reader can read different tags within its reach in a limited time
• A tag can be structured to perform unlimited duties
• The data quality is 100% guaranteed

RFID has its limitations, which can be summarized as follows:

• RFID does not work well, or does not work at all, with RF-opaque or RF-absorbent items
• Surrounding conditions may affect performance
• There is a limit to how many tags can be read within a time slot (see the simulation sketch below)
• Hardware set-up may limit performance
• The technology is still immature
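The time-slot limitation stems from tag anti-collision: when several tags answer the reader in the same slot, none of them is read. The short simulation below is an illustrative addition to the text, not part of it; framed slotted ALOHA is one common anti-collision approach, and the parameters are invented. It shows how successful reads per round degrade as tag density grows:

```python
import random

def aloha_round(num_tags: int, num_slots: int) -> int:
    """One framed-slotted-ALOHA inventory round: each tag picks a
    random slot, and only slots chosen by exactly one tag yield a
    successful read."""
    slots = [0] * num_slots
    for _ in range(num_tags):
        slots[random.randrange(num_slots)] += 1
    return sum(1 for hits in slots if hits == 1)

random.seed(1)
for tags in (8, 32, 128):
    rounds = [aloha_round(tags, 16) for _ in range(1000)]
    avg = sum(rounds) / len(rounds)
    print(f"{tags:3d} tags, 16 slots: {avg:.1f} tags read per round")
```

With 128 tags contending for 16 slots, almost every slot collides, which is exactly the per-time-slot limit noted in the list above.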
Bar code is a scheme for representing textual information; the symbols are generally vertical lines, spaces, squares and dots. Compared with RFID, the bar code is the newer technology: the first patent was issued in 1949, and the first application of bar code technology was a rail car tracking system implemented in 1960 (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005). The method of encoding letters and numbers using these elements is called a symbology, which has the following characteristics:

• A symbology with a better encoding technique leads to error-free and efficient encoding
• Better character density can represent more information per unit of physical area
• Better error-checking capability enables data reading even in cases where some components are damaged or missing (illustrated by the check-digit sketch below)

It is possible to have three different categories of symbols: linear, with vertical lines separated by white spaces, holding an overall maximum of about 50 characters; two-dimensional, with a high data storage capacity of up to 3,750 characters; and three-dimensional, which is a linear bar code integrated into a surface (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005). Bar codes are read by scanners, which flash a light across the bar code area: during this process the scanner measures the intensity of the light reflected by the white and dark areas of the bar code, since the dark areas absorb light while the white areas reflect it back. The light pattern is captured and translated by a photodiode or photocell into an electric signal, which is in turn converted into digital data represented as ASCII characters; this is the same data that was incorporated within the bar code at the origin (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005).

The advantages are (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005):

• Rapid and accurate data collection
• Increased operation efficiency
• Reduced operation costs

The shortcomings are:

• Bar codes can be easily damaged
• Reader efficiency can be affected by environmental conditions
• The presence of obstacles prevents the scanner from reading the bar code
• Fast-moving items do not allow the scanner to work properly

Figure 2. Comparison table between RFID and bar code
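As a concrete illustration of the error checking mentioned above (an addition to the original text): most linear symbologies embed a check digit so the scanner can detect a misread instead of silently delivering wrong data. The sketch below computes the standard EAN-13 check digit:

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit for the first 12 digits.

    Digits in odd positions carry weight 1 and digits in even
    positions carry weight 3; the check digit rounds the weighted
    sum up to the next multiple of 10.
    """
    if len(first12) != 12 or not first12.isdigit():
        raise ValueError("expected exactly 12 digits")
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# A scanner recomputes this digit after every read; a mismatch
# means the read is rejected.
print(ean13_check_digit("400638133393"))  # -> 1, full code 4006381333931
```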
Figure 2 summarizes the advantages and disadvantages of RFID and Bar code by listing specific and relevant characteristics, based on the considerations made above. However, RFID and Bar code are not immune to some common disadvantages, and Figure 3 provides an overview of these. The situation described above leads to the conclusion that a replacement or takeover of Bar code by RFID technology is unlikely to happen: RFID is still an immature and developing technology compared to the widely consolidated and used Bar code. It is possible to see many areas of improvement for both of them based on the analysis done so far, and at the moment the logical conclusion is that RFID and Bar code can co-exist; many of the applications for RFID lie outside the reach of the current Bar code scenario, where business needs require something other than Bar code (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005).
Figure 3. Main disadvantages for RFID and bar code
RFID and Bar Code Application: An Example in the Travel Industry

RFID and Bar codes coexist, together with mobile devices. In a joint initiative, Finnair and Nokia have implemented a new way to manage ground-based staff: work assignments are transmitted directly to mobile devices, so employees can tackle their job tasks immediately, and when they have finished, their mobile device reads the RFID tags located at each key location; this data is then routed to the central management database. In addition, Finnair and TDC Mobile are piloting the use of mobile devices for check-in, in an initiative called "Mobile bar code boarding pass". After buying the ticket, passengers check in by mobile, on the internet or at the check-in desk, and receive a message with a two-dimensional bar code on their mobile phone; once at the airport, passengers retrieve the message and have it scanned for checking luggage, for security screening and when boarding the aircraft: no paper-based boarding card is needed (Toro, 2007).
RFID Application Examples

Mobil introduced its "Speedpass" system in 1997 to fast-track payment across its petrol station network; the system was later extended to Mobil's convenience stores. In 2001, the company extended its Speedpass services to McDonald's, and in 2004 Mobil entered an agreement with Stop & Shop stores to test whether Mobil's customers could buy their groceries and food with it. Most likely, car drivers on the move do not have much time to dedicate to shopping, and the underlying speed concept demonstrates how mobile technology can enhance the traditional shopping experience in everyday life (Lahiri, 2006; Garfinkel & Rosenberg, 2006). Another example of RFID use can be found in hospitals, where three types of RFID can be identified: those that track people and items around the premises, those that safeguard the use of medical equipment, and those that assist medical staff in their daily job. These are different situations, but each of them can interact with the others, providing a unique view of the hospital environment. RFID can provide the granularity needed to properly assess movements, people or objects, even at home, when the need is to monitor the wellbeing of elderly people or children needing special care and attention (Lahiri, 2006; Garfinkel & Rosenberg, 2006). As a further example, in the retail industry, RFID and tag technology play an important role in asset protection for non-food products: expensive items, and items that can easily be hidden by shoplifters, are protected by visible or invisible tags that must be removed at the checkout points. In this case, RFID and Bar code technology can coexist because they cover two different needs: once the tag has been removed (so the customer can pass through the exit grid without an alarm being activated), the same item can be scanned for payment; eventually, the record of the transaction is transferred to the back office, enabling the marketing analysis described below.
Bar Code Application Examples

The largest deployment of Bar code technology has happened in the retail industry: the major retail chains (Wal-Mart in the US; Tesco, Asda and M&S in the UK, for example, but the list could continue) have recently equipped their stores with self-checkout points or kiosks, enabling customers to scan their shopping products, pay by cash or credit/debit card, get cash back and even use fidelity vouchers against current or future purchases. This is also an extremely powerful way to capture data for marketing research and analysis: as a product is scanned, the information is automatically transmitted to the central database, which can then calculate trends and the attractiveness of that product, the other products bought in combination with it, and the success of a promotion campaign, with real-time information transmitted directly to the back office of the store chain to organize replenishment in a timely manner. Self-service checkout kiosks are therefore not just a way to reduce the number of cashiers and save space. The availability of complex, aggregated data about products within a bar code also plays an important role in food health and safety, and many regulations around the world prescribe very specific and rigid rules for fresh and perishable food products. Each store can retrieve, on a daily basis, the food products reaching their use-by date and pull each item from the regular shelf space, or organize a dedicated space where these items can be sold at a reduced price. This happens in many stores around the London area: as the risk for the retail shop is having to throw away food, and therefore assets, it is better off trying to sell as much as possible, even at a reduced price, relying on the fact that customers may be more tempted to buy items for immediate consumption at home when their price is lower than usual. There are, though, some concerns about privacy and, more broadly, pervasive computing, because these are emerging technologies that are far from stable yet: a tag reader can detect and collect information from any belongings, documents or shop items bought elsewhere (the ringing alarm at the exit gates that many people experience is an example). The main deployment of tagging and RFID is in retail and supply chain environments, where privacy is not an issue; however, when this technology is applied and attached to consumer goods at item level, privacy becomes a genuine concern.
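Returning to the data-capture point above, the trend calculation performed by the central database can be pictured with a toy sketch; the scan events below are invented for illustration only:

```python
from collections import Counter

# Invented stream of (store, product) scan events from checkout kiosks.
scans = [("London-1", "milk"), ("London-1", "bread"), ("London-2", "milk"),
         ("London-1", "milk"), ("London-2", "wine"), ("London-2", "milk")]

# Product attractiveness as raw scan counts, per chain and per store.
chain_trend = Counter(product for _, product in scans)
store_trend = Counter(scans)

print(chain_trend.most_common(1))          # [('milk', 4)]
print(store_trend[("London-2", "milk")])   # 2
```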
It is now time to introduce the concept of Service Oriented Architecture, to analyze how wireless technologies can be deployed in a dynamic and user-oriented way.
Service Oriented Architecture (SOA)

Service oriented architecture can be defined as a method of conceiving, implementing and distributing business functions, data or applications, within a geography or across the enterprise, enabling the reconfiguration of business processes when necessary. There are some key points which must be taken into account (Sonic, 2006; Cisco, 2004, 2005; Plumtree, 2005; Sprott, 2004; Symons, 2005):

• SOA is based on WWW standards (illustrated by the sketch below)
• The services or, more precisely, the content of SOA allows business flexibility
• SOA can incorporate best practice to create designs aimed at developing and enhancing business processes
• SOA covers existing systems
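To ground the "WWW standards" point with something concrete (this example is an addition: the endpoint, namespace and message fields are invented, not taken from the chapter), a service in an SOA is typically invoked by posting a standard XML message over HTTP:

```python
import urllib.request

# Hypothetical endpoint and operation, for illustration only.
ENDPOINT = "http://example.com/services/LoanRisk"

soap_body = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <AssessRisk xmlns="http://example.com/loans">
      <customerId>12345</customerId>
      <amount>25000</amount>
    </AssessRisk>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=soap_body.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "AssessRisk"},
)
# Any SOAP-capable stack, fixed or mobile, can issue this call,
# which is what makes the architecture reconfigurable.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```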
The combination of these characteristics demonstrates and confirms the fit between the RFID and Bar code technologies described above and the realization and distribution of services, applications or content through SOA: the difficulties and challenges brought in by new technology development dictate implementing a model that facilitates the evolution of business processes and supporting technology (Sonic, 2006; Cisco, 2005; Plumtree, 2005; Ascential, 2005). Figure 4 provides a practical example, in business development terms, of the moment at which mobile technologies can be adopted. The key to a successful adoption and distribution of mobile technology based on an SOA infrastructure is an evolving and continuous alignment between Business and Technology, and the capability of IT to properly support Business in responding to a competitive market and rapidly developing new solutions, while at the same time changing business models by gathering real-time information through responsive applications and services (Sonic, 2006; Cisco, 2005; Plumtree, 2005; Symons, 2005). This model, if properly understood and implemented, opens the door to the semantic web, where content and applications are exchanged, captured and analyzed in real time, thus enabling the adoption and delivery of responsive answers to business needs. RFID and Bar code do this by capturing data at the end point and transferring it back to the source, at the opposite end, eventually changing the characteristics of the original data itself. It is now necessary to move on to the next step and describe the IT and Business architecture: today's enterprises require a new IT strategy, one that will improve their ability to respond to competitive pressures and market demands.
From SOA to Service Oriented Network Architecture

The emerging solution takes advantage of a more flexible, adaptive, and feature-rich IT architecture: Service-Oriented Network Architecture. It helps enterprises evolve their existing infrastructure into an Intelligent Information Network (IIN) that supports new IT strategies, including service-oriented architecture (SOA), Web services, virtualization and mobility. By integrating advanced capabilities enabled by intelligent networks, enterprises reduce complexity and management costs, enhance system resiliency and flexibility, and improve usage and efficiency of networked assets. It allows enterprises to use their network as a strategic asset that properly aligns IT resources with Business priorities. The result is a lower total cost of ownership (TCO) and increased revenue, which over time enable organizations to shift an increasing proportion of their IT budgets toward strategic investment and business innovation
Figure 4. Adoption of mobile technology
(Sonic, 2006; Cisco, 2004, 2005; Plumtree, 2005; Sprott, 2004; Symons, 2005).

Service oriented network architectures are based on a three-layer design:

• Application layer: this layer includes all software used by end users within the enterprise for business purposes (such as enterprise resource planning and customer relationship management) and software used for collaboration (for example, unified messaging and conferencing).
• Networked infrastructure layer: this layer interconnects devices at critical points in the network (campus, data center, network edge, metropolitan-area network [MAN], WAN, branch offices, and tele-worker locations) and facilitates transport of services and applications throughout the enterprise.
• Interactive services layer: this layer optimizes communications between applications and services in the application layer by taking advantage of intelligent network functions such as embedded security, identity, and quality-of-service (QoS) features.
The 7 OSI (Open System Interconnection) layers, shown in Figure 5, provide the ground for understanding where the mobile technologies discussed here can be successfully implemented (Simeneau, 2005; White, 2001). Since tags store information, which may include location data, not all intelligence needs to be held in corporate networks and enterprise systems. Exchange of information may be restricted to tags and readers, may be processed by a local server via a LAN, or may be aggregated and passed on to a distribution centre.

Figure 5. OSI 7 layers

Many organizations are starting with RFID pilots within the confines of their own environment, as localized internal deployments. The reason for this is simple: it restricts the scope to a manageable limit, allowing learning to occur in a controlled manner. But most organizations implement RFID to facilitate supply chain efficiencies, so extending RFID beyond the organization is inevitable. Even so, there are some small steps that an organization can take in venturing outside its own environment (Sonic, 2006; Cisco, 2005; Plumtree, 2005; Brown & Wiggers, 2005; Sprott, 2005).
Creation of Value

The creation of value using RFID (as well as any other mobile technology) can be summed up by analyzing Figure 6. The circled areas describe the main development steps, from an internally targeted phase (internal deployment) to a more complex, extra-company connected enterprise encompassing many businesses. This evolutionary process implies two main issues:

• A growing sophistication of solutions and intensive use of the network infrastructure (vertical axis)
• An increasing density of devices, which probably means a wider adoption of open source software to ease the coding and connection between heterogeneous devices and systems, particularly when the need is to link enterprises (horizontal axis)

Figure 6. Creation of Value by using RFID

IT is still an enabler of efficiency and effectiveness (Sonic, 2006; Cisco, 2005; Plumtree, 2005): as the development environment becomes more and more demanding, Business
needs to step in and take the lead, properly supporting the roll-out of the deployment by facilitating the required changes in process and organizational structure, capturing inputs from the market and translating them into competitive advantage (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005). This means having a clear understanding of the priorities and issues of an RFID deployment; Figure 7 describes them by deployment phase, with the left side being the starting point. Over the years, computer systems and networks have enabled the tracking of products across the supply chain to make sure that items departing from point A could get to point B: in a retail environment, this is a crucial step in replenishing shelves on time and in line with customers' demand. The acid test comes when the shopper decides which product to select off the shelf, and when the shopper decides to buy it again after being satisfied by the product itself (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Cisco, 2004; Brown & Wiggers, 2005). The analysis carried out in the previous paragraphs allows us to state the case for SOA as a suitable vehicle for a relatively secure and fast deployment of the mobile technologies discussed: an SOA and service oriented network architecture environment is a flexible enough infrastructure to host mobile devices, which can then interact with other existing applications and systems, on both the back-end and front-end sides of the company's IT and Business structure. The ever-growing complexity and the need for accurate and timely information about processes and market data require the availability of easy-to-use, portable devices. Mobile technology can be successfully implemented as an extension of Supply Chain Management, Customer Relationship Management and business-to-employee safety and security monitoring, and the workflow of real-time data and information may be transferred to Enterprise Resource Planning and Business Intelligence and, more broadly, to the whole organization and externally to Business Partners (Kalakota & Robinson, 2000). It is thus possible to understand how an SOA environment can additionally enhance this capability with flexibility, speed of implementation, and standardization of protocols and business practices, reducing the uncertainty of a relatively new wireless technology. From an end user point of view, this scenario can be considered an example of MIMO (Multiple Input Multiple Output), where companies are able to capture and assess relevant business information about processes and markets from a wide and heterogeneous set of sources at the very same time (Lahiri, 2006; Garfinkel & Rosenberg, 2006; Ascential, 2005; Cisco, 2004, 2005; Sprott, 2004).
Mobile Business, SOA and Semantic Web

Mobile technologies introduce advantages that cannot be obtained with fixed connectivity: these include localization and personalization. The previous paragraphs demonstrate that this holds for RFID and Bar codes, because they allow the delivery of customized and customizable information and data to users in both local and remote roles (Coyle, 2004; Sheshagiri, Sadeh & Gandon, 2004). The Semantic Web is an initiative supported by the W3C aimed at providing semantic meaning and context for internet resources: in such an environment, web services are an enabler for applications which can communicate with each other automatically via the internet (both fixed and wireless) using standard internet protocols. It has been demonstrated and explained that such an opportunity also exists for RFID and Bar code, and SOA can be the delivery platform for these protocols (Coyle, 2004; Sheshagiri, Sadeh & Gandon, 2004). Altogether, these technologies open a completely new scenario for mobile computing, especially because device capability and sophistication are increasing and new, enhanced devices reach the market at a fast pace. The wireless network therefore extends the richness and reach of the traditional web, which means that there is a need
Figure 7. Understanding priorities in RFID
to add meaning to the data that is generated and delivered; the broader scope of the semantic web is hence to support the mapping of current and future systems, preserving the universality of the web within the localized scenario. Once again, this is a scenario that fits RFID and Bar code (Coyle, 2004; Sheshagiri, Sadeh & Gandon, 2004).
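As an illustrative aside (the snippet is an addition, and the vocabulary and identifiers below are invented), "adding meaning" typically means recasting a raw capture as subject-predicate-object triples that machines can query:

```python
# Raw capture from a reader, and its semantic enrichment as triples.
# All URIs and property names here are invented for illustration.
raw_read = {"tag": "urn:epc:id:sgtin:0614141.107346.2017",
            "reader": "dock-door-3",
            "time": "2008-06-12T09:41:00Z"}

triples = [
    (raw_read["tag"], "rdf:type", "ex:TradeItem"),
    (raw_read["tag"], "ex:observedAt", "ex:DockDoor3"),
    (raw_read["tag"], "ex:observedTime", raw_read["time"]),
    ("ex:DockDoor3", "ex:partOf", "ex:NorthDistributionCentre"),
]

# With meaning attached, a simple query answers a business question:
# where has this item been seen?
places = [obj for subj, pred, obj in triples
          if subj == raw_read["tag"] and pred == "ex:observedAt"]
print(places)  # ['ex:DockDoor3']
```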
Mobile Semantic Web @ Work: MSpace

MSpace Mobile is a Semantic Web application enabling people to explore the world around them by leveraging contexts that are meaningful to them in time, space and subject (Mulholland, Collins & Zdrahal, 2005; Wilson, Russell, Smith, Owens & Schaefel, 2005). People unfamiliar with a city, Sydney for example, may find their physical location to be the main context around which they wish all other requests to circulate (Mulholland, Collins & Zdrahal, 2005; Wilson, Russell, Smith, Owens & Schaefel, 2005). A query like "which are the main attractions at Circular Quay and Sydney CBD that I would like to see during the weekend within walking distance of Sydney Tower?" can be gathered in the "Selection" field of the diagram below, which contains the main parameters: Sydney, the CBD and Circular Quay area, Sydney Tower, and weekend events. The next step is provided by the "Organization" process, which collects the available events on display during any given weekend: for example,
the market at the Rocks area, a concert at the Opera House and the Mardi Gras parade. With the "Exploration" step it is possible, for example, to view some pictures of the places, a map for getting to each of them and the timetable for the events, and eventually to restart the loop to shortlist and schedule the visit according to the user's own preferences.

Figure 8. Workflow for MSpace

Location Based Services, and applications disseminated across multiple infrastructures and converging fixed/mobile hubs and devices, are another confirmation of the suitability of SOA and the mobile semantic web as a proper environment for enabling and delivering an enriched user experience. From a business-to-business standpoint, more relevant for this chapter, the same comprehensive view of data and information can be achieved by integrating RFID and Bar codes and replacing the described tourist scenario with a proper business environment.
Conclusion

It has been demonstrated how mobile technology can successfully benefit the business when IT and Business align toward a clear understanding of targets and when companies are able to capitalize on market input. It has also been shown how an innovative, "off the shelf" mind set can create a genuinely changed experience for customers and companies, and how a service oriented (network) architecture is one of the best meeting points of business requirements and technology features for delivering applications and content through mobile devices. This technology can deliver a unique view of processes, data and endpoints in several environments. As the technology is still in its early stage, its potential is not well understood in every aspect, and privacy is one of the most important concerns: once mobile applications are layered on a network infrastructure with shared programs and features, they are part of the internet and exposed to its risks and threats.
References

A new service oriented architecture (SOA) maturity model (2006). Sonic Software Corporation. http://www.sonicsoftware.com/solutions/service_oriented_architecture/soa_maturity_model/index.ssp

Brown, D., & Wiggers, E. (2005). Planning for proliferation: The impact of RFID on the network. http://newsroom.cisco.com/dlls/2005/Whitepaper_031105.pdf

Business overview of service oriented network architecture (2005). Cisco Systems. http://www.cisco.com/en/US/netsol/ns477/networking_solutions_white_paper0900aecd803efff3.shtml

Cisco Internet Business Solutions Group. (2004). "2010": The retail roadmap for chief executives. Cisco Systems, US. http://newsroom.cisco.com/dlls/2004/ts_011204.html

Cisco Systems. (2005). Business overview of service oriented network architecture. http://www.cisco.com/sona

Coyle, F. (2004). Mobile computing, web services and the Semantic Web: Opportunities for m-commerce. Computer Science Department, School of Engineering, Southern Methodist University, Dallas, Texas.

Dubey, A., & Wagle, D. (2007). Delivering software as a service. The McKinsey Quarterly. http://www.mckinseyquarterly.com/article_abstract_visitor.aspx?ar=2006&L2=4&L3=43&srid=17&gp=0

Garfinkel, S., & Rosenberg, B. (2006). RFID: Applications, security and privacy. In S. Lahiri, RFID Sourcebook. New Jersey: Pearson Education.

Kalakota, R., & Robinson, M. (2000). E-Business 2.0: Roadmap for success. Addison Wesley.

Mulholland, P., Collins, T., & Zdrahal, Z. (2005). Bletchley Park Text: Using mobile and semantic web technologies to support the post-visit use of online museum resources. UK: Knowledge Media Institute, The Open University.

Retail and consumer goods – RFID deployment (2005). Ascential Company white paper, US. http://knowledgestorm.fastcompany.com/fastco/search/keyword/RFID+TAG/RFID+TAG
Sheshagiri, M., Sadeh, N. M., & Gandon, F. (2004). Using Semantic Web services for context-aware mobile applications. Mobile Commerce Laboratory, School of Computer Science, Carnegie Mellon University.

Simeneau, P. (2005). The OSI model: Understanding the seven layers of computer networks. Global Knowledge.

Sprott, D. (2004). Service oriented architecture: An introduction for managers. CBDI Forum, Ireland. http://www.cbdiforum.com/report_summary.php3?topic_id=20&report=709&order=member_type&start_rec=0

Strategic decisions on SOA (2005). Plumtree Software Inc. white paper, US. http://www.plumtree.com

Symons, C. (2005). IT strategy maps: A tool for strategic alignment. Forrester Research, US. http://www.forrester.com/Research/Document/Excerpt/0,7211,38215,00.html

Toro, R. (2007). Bon voyage. W3 IBM com corporate.

White, W. S. (2001). Enabling eBusiness: Integrating technologies, architectures and applications. Wiley.

Wilson, M., Russell, A., Smith, D. A., Owens, A., & Schaefel, M. C. (2005). mSpace Mobile: A mobile application for the Semantic Web. IAM Research Group, School of Electronics and Computer Science, University of Southampton.
Key Terms and Definitions

Bar Code (also barcode): A machine-readable representation of information (usually dark ink on a light background creating high and low reflectance, which is converted into 1s and 0s).

Open Systems Interconnection Basic Reference Model (OSI Reference Model or OSI Model): An abstract description for layered communications and computer network protocol design.

Radio-Frequency Identification (RFID): An automatic identification method relying on storing and remotely retrieving data using devices called RFID tags or transponders.

Semantic Web: An evolving extension of the World Wide Web in which the semantics of information and services on the web are defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content.

Service-Oriented Architecture (SOA): A software architecture where functionality is grouped around business processes and packaged as interoperable services.

Service Oriented Network Architecture: An emerging technology based on a particular network infrastructure set-up, which is similar to service oriented architecture but based on a different approach.
This work was previously published in Handbook of Research in Mobile Business, Second Edition: Technical, Methodological and Social Perspectives, edited by Bhuvan Unhelkar, pp. 584-594, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 3.19
Enterprise Specific BPM Languages and Tools
Steen Brahe
Danske Bank, Denmark
DOI: 10.4018/978-1-60566-669-3.ch002
Abstract

Many enterprises use their own domain concepts when they model business processes. They may also use technology in specialized ways when they implement the business processes in a Business Process Management (BPM) system. In contrast, BPM tools often provide a standard business process modeling language, a standard implementation technology and a fixed transformation that may generate the implementation from the model. This makes the tools inflexible and difficult to use. This chapter presents another approach. It applies the basic model driven development principles of direct representation and automation to BPM tools through a tool experiment in Danske Bank. We develop BPM tools that capture Danske Bank's specific modeling concepts and use of technology and which automate the generation of code. An empirical evaluation reveals remarkable improvements in development productivity and code quality. We conclude that BPM tools should provide the flexibility to allow customization to the specific needs of an enterprise.

Introduction

Business Process Management (BPM) is currently receiving much focus from industry. Top management demands to understand and control their business processes, and agility to adjust them when market conditions change. This can be achieved through Process Aware Information Systems (Dumas et al., 2005). A BPM system (Jablonski & Bussler, 1996; Leymann & Roller, 2000) is one example of such a system. It allows execution and automation of a business process that can be described explicitly as an executable workflow. Although the hype around BPM and process automation is high, previous work has shown that it is relatively complex to understand, model and implement a business process as an executable workflow (Brahe, 2007). First the process must be understood, second it must be formalized and modeled at a highly conceptual and logical level, and third the process design must be transferred to technology. Many software vendors have complete BPM tool suites for modeling and implementing business processes. Such tools are mostly based on a predefined process modeling language like
the Business Process Modeling Notation (BPMN) (White, 2006) for capturing the business process at the conceptual level, and one technology like the Business Process Execution Language (BPEL) (BPEL, 2003) for implementing the process. These tools also assume a fixed development process where only two models exist, i.e. the conceptual business process and the implementation. Using such tools causes two challenges for an enterprise that has specific requirements for its development process, uses its own modeling concepts and uses technology in specialized ways. First, a standardized modeling notation does not allow users to use domain concepts and may contain too many modeling constructs, which makes the tool difficult to use; the models may also be difficult to understand and to use as a communication medium. Second, transformation of a model into an implementation must be done manually, as the enterprise may use a variety of technologies to implement the process, and not only, e.g., BPEL, as many state-of-the-art tools support today. Even if one technology such as BPEL is used, the enterprise may be using its own implementation patterns, which cannot be generated because the transformations are hard-coded into the BPM tools. The approach behind current BPM tools is similar to that of the extinct Computer Aided Software Engineering (CASE) tools from the 1990s. They also often used a standard modeling language, one implementation technology and a standardized transformation. Their limited flexibility in supporting enterprise specific standards was one of the reasons why they were never accepted (Windsor, 1986; Flynn et al., 1995). This chapter takes another approach than state-of-the-art BPM tools. In order to avoid the CASE trap we must come up with an approach that allows an enterprise to use its own modeling notations and specific use of technology. Our hypothesis is that this can be achieved by applying the basic model driven development (Stahl et al., 2006) principles of direct representation and automation (Booch et al., 2004) to BPM tools; an enterprise
should be able to model its business processes directly in enterprise specific concepts, decide on a target platform, and write transformations that encapsulate its specific use of technology and that automate the generation of code. This leads us to the research question which we will answer through this chapter: Does an enterprise specific BPM tool improve the efficiency and quality of modeling and implementing business processes, how difficult is it to create, and is it worth the effort? We use a design research approach to answer this question: we implement the above hypothesis through an experiment where we develop BPM languages, tools and transformations for a specific enterprise. Subsequently, these are empirically evaluated to show the validity of the hypothesis. We use Danske Bank, the second largest financial institution in northern Europe, as a case study. In the absence of sufficient industrial standards, Danske Bank has defined its own development process and uses a number of different tools to support it. This has caused several challenges, as described by Brahe (2007). A prototype tool was developed to show that it provides value to develop BPM tools fitted to the needs of a specific enterprise. The prototype illustrates that it is possible to do model driven development of a business process with nearly 100% code generation. The prototype is fitted specially to Danske Bank's development process and consists of three different Domain Specific Languages (DSLs) (Mernik et al., 2005) and corresponding editors that are used to model a business process and related information. Furthermore, the tool provides transformations between the DSLs and a transformation to BPEL. These transformations capture implementation patterns specific to Danske Bank's modeling standards and use of the implementation technology. Manual changes can be introduced into the generated BPEL code by a persistence utility feature. We use a fictitious project called Customer Quick Loan throughout the chapter. First, we illustrate the current development process in Danske Bank and the observed challenges of using current BPM tools. Second, we show how the prototype tool eliminates these challenges. We conclude that BPM tools customized to a specific enterprise potentially have a huge effect on the efficiency of a project team and will result in implementations with fewer errors. However, we also conclude that developing BPM tools from scratch requires high expertise and much effort and is a strategic decision that many enterprises will not take. What we need is a tool-based framework that allows the enterprise to customize languages, transformations and tools to their specific needs instead of developing them from scratch. The rest of the chapter is organized as follows. Section 2 introduces Danske Bank and related work. Section 3 introduces the fictitious Customer Quick Loan project. Section 4 discusses challenges regarding the development process and the modeling tools used. Section 5 abstracts the development process into metamodels and algorithms for transforming models into code. Based on this abstraction, the developed BPM tools are described in Section 6. Section 7 describes an empirical evaluation of the tool. Section 8 discusses the results and Section 9 summarizes the chapter and outlines future work.
Background

This section introduces Danske Bank and presents related work.
Danske Bank

Danske Bank has grown to become the largest financial group in Denmark and one of the largest in northern Europe. It comprises a variety of financial services such as banking, mortgage credit, insurance, pension, capital management, leasing and real estate agency.
Support for executing and automating business processes is achieved through different process execution engines. One of them is batch execution of process implementations in PL1 and COBOL. Another is a BPM system from IBM, where the business processes are implemented in BPEL. The BPM system has been extended in areas where business requirements were not fulfilled. For example, the enterprise has created its own Human Task Manager to handle and distribute the human tasks that are part of an executable workflow, and its own task portal where process participants claim and execute human tasks. Furthermore, it makes specific uses of BPEL fault handlers and has defined specific strategies for capturing business and technical faults during process execution. Danske Bank has defined its own development process, as no standardized development process, e.g. the Rational Unified Process (Kroll & Kruchten, 2003), was sufficient to fulfill its requirements. Business and IT solutions are developed as one for any business problem. This is in contrast to most development methodologies, which focus on producing software solutions. The process is based on service oriented principles (Erl, 2005), where business requirements are mapped into required business services and processes. All important stakeholders are represented on a project team to ensure that different issues are addressed. This includes defining efficient business processes and specifying and developing IT systems that may support them. A project team includes business process participants, business analysts, solution architects, system developers and test specialists. Most requirements and design decisions are captured in models. They are used for documentation and communicative purposes and as blueprints for the implementation. The development process includes specialized modeling notations and the creation of different modeling artifacts.
Related Work
838
composition languages e.g. Bézivin et al. (2004); Bordbar and Staikopoulos (2004a,b); Skogan et al. (2004); Koehler et al. (2003, 2005). Common for all approaches is the use of a fixed modeling notation and a fixed transformation. In contrast, Brahe and Bordbar (2006) present a transformation framework that builds upon the use of domain specific business process modeling languages and customized transformations. They also introduce a prototype implementation that allows definition and execution of customized and pattern-based transformations for a domain specific modeling language. Fowler (2005) talks about Language Workbenches as tools that allow definition and usage of Domain Specific Languages, editors and transformation between languages. Several of such workbenches exist such as MetaEdit+ (Tolvanen & Rossi, 2003), GME (Ledeczi et al., 2001), Microsoft DSL tools, and many others. The Eclipse projects used to build Danske Bank Workbench can also be considered as a language workbench. The research presented in this chapter follows cutting edge trends in language workbenches; models should be constructed in domain or enterprise specific concepts and transformed into an implementation. The experiment presented in this chapter has been documented in details in a technical report (Brahe (2008)).
EXAMPLE: cUstOMEr QUIcK LOAN The fictitious project “Customer Quick Loan” will be used for illustrative purposes throughout the rest of the chapter. Changes in consumer patterns have required immediate action for introducing a new type of customer loans. The new loans can be requested from email and mobile phones with possible immediately approval and transfer of the requested amount to the customers account. A project team is established which includes a loan specialist,
Enterprise Specific BPM Languages and Tools
a business analyst, a solution architect, system developers and a project manager. They name the project Customer Quick Loan. In the following we will see how the project team follows Danske Banks development process to model business events, design the solution, specify the physical design and implement it as an executable workflow.
Business Events

The business analyst defines all possible business events that may occur for a given business case. A business event is defined as an occurrence that influences a business area and initiates a well defined business process in this area. The events are described in a model called an event map. For the Customer Quick Loan, the primary events are ApplyForLoan, which occurs when a customer requests a loan, and PayoffLoan, which occurs each month after a loan has been created. The event map is defined as a UML class diagram in Rational Software Modeler (Swithinbank et al., 2005) (Figure 1). Additional information about the events is specified in MS Word documents. An event is classified as external if it is invoked by an actor, e.g. a customer or another department in the enterprise. It is classified as time-dependent if the event occurs at a certain point in time. The ApplyForLoan event is external, as it is invoked by a customer, while the PayoffLoan event is time-dependent, as it is invoked once a month.
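As a minimal sketch of how such an event map could be captured as structured data rather than free text (the class and field names below are our own invention, not the chapter's actual metamodel):

```python
from dataclasses import dataclass
from enum import Enum

class EventKind(Enum):
    EXTERNAL = "external"              # invoked by an actor
    TIME_DEPENDENT = "time-dependent"  # occurs at a point in time

@dataclass
class BusinessEvent:
    name: str
    kind: EventKind
    description: str = ""

@dataclass
class EventMap:
    business_area: str
    events: list

quick_loan = EventMap("Customer Quick Loan", [
    BusinessEvent("ApplyForLoan", EventKind.EXTERNAL,
                  "A customer requests a loan"),
    BusinessEvent("PayoffLoan", EventKind.TIME_DEPENDENT,
                  "Occurs monthly after a loan has been created"),
])
```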
Solution Process Flow

When a business event takes place it will involve the execution of some business logic. For each business event, the business analyst, together with a solution architect and possibly a system developer, will model such business logic as a Solution Process Flow (SPF), which is a technology independent, or logical, business process model. Each task in the Solution Process Flow must be of type Automatic, meaning handled automatically by a service invocation; Manual, e.g. moving papers from a table to an archive; or UserExperience, e.g. creating a loan using an application user interface. The IBM Websphere Business Modeler is used to define these models.
Figure 1. Business events for the Customer Quick Loan project modelled as a UML class diagram in Rational Software Modeler.
The Solution Process Flow for the primary business event, ApplyForLoan, is illustrated in Figure 2. It consists of four logical tasks. The automatic ApproveLoan task will make a risk calculation for the customer. If the risk is high, the loan request is rejected; a process participant will be notified by the Reject task of type UserExperience and will have to send a rejection message to the customer using an application interface. If the risk is low, the loan, or possibly several loans applied for at once, will be created by the automatic CreateLoans task, and a confirmation will be sent to the customer by the Confirm task. The project team has examined the local service repository for existing services and has found that two existing service operations, called in sequence, will fulfill the requirement for the Confirm task in the Solution Process Flow. The Confirm task is therefore broken down further and modeled in a separate sub process, as illustrated in Figure 3: first, a service operation is invoked to create the content of the confirmation message, and second, a service operation is invoked to send the message by SMS, email or letter.
Physical Design

Some business processes may be implemented in BPEL, others may be implemented in PL1 or COBOL for batch execution, and some may not be implemented in IT at all. The project team decides to automate the execution of the ApplyForLoan process by implementing it in BPEL. Two kinds of physical specification now have to be made: the BPEL process design, also called the Control Flow Behavior, and a Workflow Specification, which contains additional information required to implement the Solution Process Flow and all its tasks in the BPM system.
Control Flow Behavior

The Solution Process Flow is the starting point for the Control Flow Behavior, a model of the physical implementation in BPEL which is also created using Websphere Business Modeler. Three physical design decisions make the two models different. These are described in the following three sections.
Separate vs. Inlined Subprocess

It must be decided whether the Solution Process Flow should be implemented as one BPEL process, or whether it should be broken down into several. Extracting parts of the process into sub processes has advantages: more than one developer can work on the construction simultaneously, it is easier to change and deploy a small sub process than to change and deploy the main process, and a sub process can be reused by other processes. However, extensive use of sub processes has the disadvantage of maintaining and operating several processes instead of one main process. This causes an overhead and introduces complexity regarding change management. The developer decides to implement the Confirm sub process as an inlined sub process in the ApplyForLoan BPEL process. This is illustrated in Figure 4. The developer could also have implemented the sub process as a separate BPEL process.

Figure 2. Solution Process Flow for the ApplyForLoan business event modeled in Websphere Business Modeler.

Figure 3. The Confirm task modeled as a sub process.
Technology Dependent Functionality

Functionality needed for the implementation in a specific technology should not be modeled in the Solution Process Flow. Using BPEL, this could be complex data transformations inside a BPEL process that are externalized as separate service invocations, synchronization of data between the different systems that make up the extended BPM system in Danske Bank, or a specific service invocation that updates the business state for the specific process instance, a feature of Danske Bank's extended BPM system. An additional service invocation has to be inserted in the Control Flow Behavior for the ApplyForLoan process after the AssessRisk service invocation. It updates the business state of the process instance to either "Approved" or "Rejected". This information can be viewed by employees in Danske Bank through the Human Task Portal. The additional functionality would not be required if the Solution Process Flow were implemented using another technology, like COBOL (see Figure 4).
Implementation Patterns

Each task in the Solution Process Flow has to be mapped to a task in the Control Flow Behavior. A task can be implemented by different implementation patterns. In this context, implementation patterns are code templates and rules used by Danske Bank to implement tasks of different types. An Automatic task can be implemented by three different patterns: ServiceOperation, MultipleInstances and Bundle. Tasks of type Manual and UserExperience are always implemented using a HumanTaskManual or a HumanTaskGUI pattern. The patterns are explained in a later section. When modeling the Control Flow Behavior, these pattern names are used to classify all tasks, in the same way as the Automatic, Manual and UserExperience classifiers were used in the SPF model. Table 1 lists how tasks from the ApplyForLoan Solution Process Flow have been mapped to the Control Flow Behavior. The implementation pattern to be used in the physical design is determined from the task type in the SPF and the description of the task in the corresponding System Use Case.

Figure 4. Control Flow Behavior of the ApplyForLoan SPF.
Table 1. Solution Process Flow tasks mapped into Control Flow Behavior

Task            Implementation pattern
AssessRisk      ServiceOperation
CreateLoans     Bundle
CreateContent   ServiceOperation
SendMessage     ServiceOperation
Reject          HumanTaskGUI

Workflow Specific Information

Much information has to be specified to implement the Control Flow Behavior in the BPM system. For a ServiceOperation task this includes information about which service operations to invoke, input and output data structures, exception handling and escalation of errors, whether the task must be restarted in case of failures during service invocation, etc. A task of type HumanTaskManual or HumanTaskGUI is a task handled by humans; process participants are able to list, claim and execute such a task from a task portal. For both task types the following information is needed: the groups allowed to claim and execute a task are defined as Allocation Rules; labels, descriptions and data values must be described in three to five different languages for presentation to the process participants in the task portal; and rules about the earliest start of the task, a possible deadline and several other items must also be specified. The HumanTaskGUI task additionally has a link to an existing application interface where the process participant has to handle the task; it must also be specified which data values from within the running BPEL process instance the link should transfer to the business system. For the process itself, additional information also has to be specified. This includes input data for the process, allocation rules, and descriptions in several languages for presentation in the task portal. All information for one task is specified in an MS Word document called a Workflow Task Specification. A Word template is available for each task type, describing the required information. Six task specification documents are created for the ApplyForLoan process.
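The mapping in Table 1 can be pictured as a small set of selection rules. The sketch below is a deliberately simplified guess at such rules (the real choice also depends on the System Use Case), included only to make the idea concrete:

```python
# Simplified, assumed rules -- not Danske Bank's actual decision logic.
DEFAULT_PATTERN = {
    "Manual": "HumanTaskManual",
    "UserExperience": "HumanTaskGUI",
    "Automatic": "ServiceOperation",
}

def select_pattern(task_type: str, spawns_parallel_processes: bool = False) -> str:
    """Pick an implementation pattern from the SPF task type."""
    if task_type == "Automatic" and spawns_parallel_processes:
        # e.g. CreateLoans, which may create several loans at once
        return "Bundle"
    return DEFAULT_PATTERN[task_type]

print(select_pattern("Automatic"))                                  # ServiceOperation
print(select_pattern("Automatic", spawns_parallel_processes=True))  # Bundle
print(select_pattern("UserExperience"))                             # HumanTaskGUI
```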
BPEL Construction

All required information and design decisions are now available, and the BPEL process can be constructed once the Control Flow Behavior and Workflow Specification have been completed. A system developer now maps the Control Flow Behavior into a BPEL process. From the Workflow Specification he is able to specify input/output data and set attributes of the process, such as when it is valid from, whether it is a long-running process, etc. Other systems, such as Danske Bank's proprietary Human Task Manager, can also be populated with the allocation rules specified in this document. Each task in the Control Flow Behavior is mapped to the BPEL implementation based on the developer's knowledge of the BPEL implementation patterns in Danske Bank. Each task type introduced in the previous section has a certain BPEL template and an algorithm for how to implement it. The pattern names and corresponding BPEL templates are illustrated in Table 2.
The Service Operation pattern invokes a service operation and incorporates a specific way of using logging and fault handling. All service operations in Danske Bank throw a Technical Fault, which is caught by the fault handler for the Invoke node. The fault handler forces the invoke node into the stopped state. The MultipleInstances pattern is a loop containing a service invocation as implemented by the Service Operation pattern. It is similar to the workflow pattern "Multiple instances without a priori runtime knowledge" (van der Aalst et al., 2003).

Table 2. Danske Bank specific BPEL implementation patterns. The dots are replaced with information from the Control Flow Behavior and the Workflow Specification documents.

Pattern name        BPEL template
ServiceOperation    …
MultipleInstances   …
Bundle              … …
HumanTask           …
The service operation invoked in the loop may initiate another process or thread that runs concurrently. For some business scenarios the main business process is not allowed to continue before all the processes initiated behind these service invocations have finished. Danske Bank has extended the BPM system with infrastructure functionality that allows such a mechanism. In the BPEL process it is called the Bundle pattern and is implemented as the MultipleInstances pattern followed by an event. At runtime, after invoking the service operation a number of times, the main process will wait until all the initiated, concurrently running processes have finished. The BPM infrastructure extension will be notified about the state change and will fire the event that causes the BPEL process to continue executing. The HumanTaskManual and HumanTaskGUI patterns are implemented by invoking a specific service operation exposed by Danske Bank's Human Task Manager, followed by an event node. The translation of a task and its related information is purely manual, even though it is the same patterns that are implemented multiple times. The above descriptions show only a subset of the implementation steps that the developer has to go through when implementing the tasks from the Control Flow Behavior. Common to all patterns is that data mappings also have to be specified before invoking a service operation. Control flow logic also has to be specified by the developer; it is described on the edges that connect the tasks in the Control Flow Behavior model.
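Since the template column of Table 2 is not reproduced here, the sketch below illustrates the general idea of pattern-based construction that the chapter later automates. The BPEL fragment and its element names are an approximation under stated assumptions, not Danske Bank's actual template:

```python
# Assumed, illustrative template for the ServiceOperation pattern.
# WS-BPEL allows an inline fault handler on an invoke activity, which
# is how the TechnicalFault described in the text could be caught.
SERVICE_OPERATION_TEMPLATE = """\
<invoke name="{task}" partnerLink="{partner_link}" operation="{operation}"
        inputVariable="{task}Request" outputVariable="{task}Response">
  <catch faultName="svc:TechnicalFault">
    <!-- enterprise-specific logging and forcing the node to stop -->
    <empty/>
  </catch>
</invoke>"""

def instantiate(task: str, partner_link: str, operation: str) -> str:
    """Fill the pattern template with data taken from the Control Flow
    Behavior and the Workflow Task Specification documents."""
    return SERVICE_OPERATION_TEMPLATE.format(
        task=task, partner_link=partner_link, operation=operation)

print(instantiate("AssessRisk", "RiskServicePL", "assessRisk"))
```

Doing this by hand for every task is exactly the repetitive, error-prone work that motivates the automated transformations discussed in the rest of the chapter.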
A Need for Customized Tools

The development team faces several challenges in using the described BPM tools and development process:

Difficult to use domain concepts. Danske Bank has defined its own concepts for modeling business processes, but it is not possible to create models by directly using these concepts. Tools have been twisted and tweaked to force them to behave as desired. The usability is low and it is hard to use the models for communicative means.

Difficult to comprehend information. A number of different tools are used to describe and specify how a business process should be implemented. The developer and the architect therefore need to look into several different tools and models to find relevant information.

Missing traceability and consistency. It is difficult to find relevant models because traceability between models is handled by textual descriptions. Furthermore, a model created in one tool cannot refer to models created in other tools. Consistency between models must therefore be handled manually.

Imprecise data definitions. The data definitions can only be interpreted by humans. For instance, the name and version of a service operation is specified in the MS Word Workflow Specification document.

Because of the above challenges, transformation of the models and information from specification and physical design into physical artifacts such as BPEL has to be done purely manually. The system developer needs to open models in Rational Software Modeler and Websphere Business Modeler and retrieve information manually, and he or she must open many MS Word documents to get detailed information about design decisions. Although model driven development is the goal of the development process, the result is merely a document driven development process. Even for the simple ApplyForLoan business process, with only four tasks, the number of models and documents gets high: one RSM model, two WBM models and about 10 Word documents make up the specification. It is quite difficult to comprehend the large amount of distributed information required for constructing the BPEL code. Further, the construction process is inefficient and error-prone, as much of the information from the specification has to be manually re-entered into the physical artifacts.
The core of the problem is that the commercial tools presume one development process defined by the software vendor, a fixed set of modeling languages and a specific way of using the implementation technology. This is in sharp contrast to the requirements of Danske Bank, which found the standard development process and standard notations insufficient for its needs. It needs to build its own development process into the tools, to use its own modeling notations and artifacts, and to define its own use of technology.
ABSTRACTED DEVELOPMENT PROCESS

In order to develop tools that address the above challenges, we need a formal description of the required information and of the transformation algorithms. In this section we therefore use the Customer Quick Loan example to abstract the development process into metamodels and transformations. First, we give an overview of the current development process and describe the requirements for a model driven development process. Second, we introduce the abstracted development process, which uses the metamodels and transformations that we develop in this section. Last, we define these metamodels and describe algorithms for carrying out the transformations. The metamodels and transformation algorithms form the basis for the prototype tool described in the next section, which has been developed specifically for Danske Bank.
Current Development Process

Figure 5 gives an overview of the development process described in the previous section and illustrates the created artifacts as well as the design decisions. The artifacts are depicted with rounded boxes to indicate that they are not precisely modeled, and the clouds indicate decisions that are not documented but instead put directly into models or code. Much of the information required through the development process is described as plain text. A human must read and interpret it to be able to construct the implementation. The cloud between the Solution Process Flow and the Control Flow Behavior illustrates that decisions about the physical design are taken by the architect or developer. First, for each sub process modeled in the SPF it must be decided whether it should be implemented as an inlined flow or as a separate process. Second, additional functionality must be specified. By defining the Control Flow Behavior from scratch, merely inspired by the Solution Process Flow, the possibility of tool-based consistency checking between them is lost. The Control Flow Behavior model needs to be manually updated each time the Solution Process Flow changes. The cloud between the Control Flow Behavior model and the BPEL code indicates decisions taken by the developer about BPEL-specific information, such as the name of the project where the code is being developed, the default package name, the target namespace to use in the BPEL process, whether generated WSDL files are kept in separate directories, etc.
Figure 5. Current development process with main modeling artifacts and decision points. The clouds indicate that decisions are not documented, and the rounded boxes indicate that no metamodels are used.

Requirements for a Model Driven Development Process

One of the main ideas behind model driven development is to have tools that can transform platform independent models into platform specific models, and then generate the implementation code. In our example this means transforming a Solution Process Flow into a Control Flow Behavior from which the BPEL implementation and related documents can be generated. In general, three basic requirements must be fulfilled to enable an efficient model driven development process:

• Information and design decisions must be specified precisely in models.
• Transformations between models must be formally described.
• Information added to generated models or code must survive future transformations.

Creating precise models requires the availability of languages, or metamodels, that support modeling standards and allow all required information to be modeled in a precise manner. As Danske Bank has created its own notations and uses technology in specific ways, it needs to be able to express this in its models.

Abstracted Development Process

Figure 6 illustrates the model driven development process that we describe through the rest of this section. It uses metamodels, called Eventmap, SPF and WFSpec, for modeling event maps, Solution Process Flows and Workflow Specifications.

Figure 6. New development process with metamodels and transformations. Information is specified precisely by using the Eventmap, SPF, WFSpec, BPELCodeGen and ModelInjections metamodels.
It further uses a ModelInjection metamodel and a BPELCodeGen metamodel. They are used to capture the decisions currently taken in the “clouds”. The metamodels form the basis for algorithms that can generate models and code. The BPELCodeGen metamodel is used to describe specific BPEL implementation decisions, while the WFSpec metamodel and the ModelInjection metamodel are used to describe the three differences between the Solution Process Flow and the Control Flow Behavior described in a previous section. Decisions about how to implement sub processes modeled as part of the Solution Process Flow are captured by the WFSpec metamodel. It also specifies workflow-specific information for each task in the Solution Process Flow. Additional technical functionality is modeled as separate process fragments, also called model injections, as they are to be injected at specific points in the Solution Process Flow to generate the Control Flow Behavior. Process fragments are modeled using the Solution Process Flow metamodel. The relation between a process fragment and where to inject it is captured by the ModelInjection metamodel. The implementation patterns to be used for implementing tasks in the Solution Process Flow are documented by the WFSpec metamodel, for instance that an Automatic task is implemented by the ServiceOperation or the Bundle implementation pattern.
The Control Flow Behavior model has disappeared, as it is indirectly generated from the Solution Process Flow, the WFSpec model and the ModelInjection model. The development process illustrated in Figure 6 has been implemented in the prototype tool described in the next section, which uses the metamodels to capture information precisely and transformations to automate the generation of the BPEL implementation.
Metamodels

The five metamodels introduced above will now be described. They have been developed by analyzing the current development process, including discussions with development teams and enterprise architects and examination of educational material.
Eventmap Metamodel

Figure 7. Event map (EventMap) metamodel. An event can either be external or timedependent and consists of a number of scenarios.

The Eventmap metamodel, depicted in Figure 7, expresses how events can be modeled and related. The metamodel incorporates all information that was previously described as plain text in MS Word documents. Inheritance has been used to define the External and Timedependent event types and their requirements for different information. An abstract Event metaclass contains attributes common to both types of events, while the TimedependentEvent and ExternalEvent subclasses contain the specific attributes that were previously described in MS Word documents.
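Since the metamodels are later implemented with EMF (see the tool development section), this inheritance structure can be illustrated programmatically. The following sketch builds the Event hierarchy with the Ecore API; the attribute names and the namespace URI are illustrative and not taken from the actual metamodel.

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EDataType;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class EventmapMetamodelSketch {

    public static EPackage buildEventmapPackage() {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        EPackage pkg = f.createEPackage();
        pkg.setName("eventmap");
        pkg.setNsPrefix("eventmap");
        pkg.setNsURI("http://example.org/eventmap"); // illustrative namespace

        // Abstract metaclass with attributes common to both event types.
        EClass event = f.createEClass();
        event.setName("Event");
        event.setAbstract(true);
        event.getEStructuralFeatures().add(attr(f, "name", EcorePackage.Literals.ESTRING));
        event.getEStructuralFeatures().add(attr(f, "priority", EcorePackage.Literals.ESTRING));

        // Subclasses carry the type-specific attributes that were
        // previously described in MS Word documents.
        EClass external = f.createEClass();
        external.setName("ExternalEvent");
        external.getESuperTypes().add(event);

        EClass timedependent = f.createEClass();
        timedependent.setName("TimedependentEvent");
        timedependent.getESuperTypes().add(event);
        timedependent.getEStructuralFeatures().add(attr(f, "schedule", EcorePackage.Literals.ESTRING));

        pkg.getEClassifiers().add(event);
        pkg.getEClassifiers().add(external);
        pkg.getEClassifiers().add(timedependent);
        return pkg;
    }

    private static EAttribute attr(EcoreFactory f, String name, EDataType type) {
        EAttribute a = f.createEAttribute();
        a.setName(name);
        a.setEType(type);
        return a;
    }
}
```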
Solution Process Flow Metamodel

A Solution Process Flow is constructed for each event modeled in the event map. The SPF metamodel is illustrated in Figure 8. It is a simple flow-based metamodel that closely resembles a UML activity diagram. The difference is the use of the domain specific task types, i.e. Automatic, Manual and UserExperience, and the domain specific edges, i.e. ProcessConnection, DialogConnection and ProcessDialogConnection.
Workflow Specification Metamodel

The Workflow Specification (WFSpec) metamodel, illustrated in Figure 9, is a formalization of the Workflow Specification previously defined in MS Word documents. A WFSpec model refers directly to a Solution Process Flow model instead of referring to a Control Flow Behavior model, as the latter is no longer explicitly modeled after the introduction of the Model Injection concept in the previous section. Much information is required by the WFSpec metamodel; therefore only selected parts of it are described here. The SPF4WFM metaclass is the main element. It refers to a Solution Process Flow model and has several attributes specifying information required for implementing the BPEL process, e.g. a deadline rule, process lifetime information, allocation rules about process responsibility, department owner, process type, etc. Many of these attributes are specific to Danske Bank, as a BPEL process implemented in the BPM system is part of a larger proprietary case system that extends the commercial BPM system with additional functionality.
Figure 8. Solution Process Flow (SPF) metamodel. Tasks are modeled by the Automatic, Manual, UserExperience and SubProcess task types and connected in a control flow by using edges of type Process, Dialog or ProcessDialog.
The SPF4WFM metaclass contains a number of TaskSpecification elements. A TaskSpecification can either be an AutomaticSpecification, a ManualSpecification, a UserExperienceSpecification or a SubProcessSpecification. An element of one of these metaclasses refers to a task of type Automatic, Manual, UserExperience or SubProcess, respectively. A TaskSpecification specifies the additional information required for the implementation in BPEL and which implementation pattern to use. Previously, information about the implementation pattern was stored directly in the Control Flow Behavior, while additional information was stored in MS Word documents.
Additional Metamodels

Two more metamodels have been defined to capture additional information in the development process. The BPEL code generation (BPELCodeGen) metamodel is used to store decisions about how to implement a physical design in BPEL. It refers to a Solution Process Flow and specifies the target namespace to use for the BPEL process, the name of the project that should contain the BPEL process, the base package name to define the BPEL process in, whether WSDL files should be located in separate folders, and the name of the base WSDL folder. The previous section described how the Control Flow Behavior could be generated based on the Solution Process Flow model and model injections.
Figure 9. Workflow specification (WFSpec) metamodel. The SPF4WFM metaclass refers to a Solution Process Flow and contains a number of TaskSpecifications. A task specification refers to a task in a Solution Process Flow and can be of type Manual, UX, Automatic or SubProcess.
The ModelInjection metamodel keeps track of all the process fragments to inject and where to inject them in a Solution Process Flow model.
Transformations

Now that all information required during the development process can be stored precisely in models, we are able to explicitly define the transformation algorithms illustrated in Figure 6, which depicted the abstracted development process. Here we only give a short description of the algorithms. Detailed descriptions in pseudo code can be found in Brahe (2008).
From Event to Solution Process Flow

One Solution Process Flow model has to be created for each event in the eventmap. Algorithm 1 in Figure 6 creates an empty Solution Process Flow model, which is named and stored according to Danske Bank's standards. The analyst and the architect model the behavior of the business process inside the generated model.
From SPF to Workflow Specification

Algorithm 2 in Figure 6 generates the Workflow Specification (WFSpec) model based on the Solution Process Flow model. It is named and stored according to Danske Bank's standards. A TaskSpecification element is generated and added to the WFSpec model for each task in the Solution Process Flow. The generated WFSpec model contains all required information, but all attributes hold default values; the architect therefore successively fills it in with correct information.
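The core of Algorithm 2 can be sketched in plain Java. The model classes, attribute names and default pattern choices below are illustrative stand-ins for the actual SPF and WFSpec metamodels:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of Algorithm 2: generate a WFSpec model with one
// default-valued TaskSpecification per task in the Solution Process Flow.
public class SpfToWfSpecSketch {

    enum TaskType { AUTOMATIC, MANUAL, USER_EXPERIENCE, SUB_PROCESS }

    record Task(String id, String name, TaskType type) {}

    static class TaskSpecification {
        String taskId;                // refers back to the SPF task
        String implementationPattern; // default until the architect decides otherwise
        TaskSpecification(String taskId, String defaultPattern) {
            this.taskId = taskId;
            this.implementationPattern = defaultPattern;
        }
    }

    static List<TaskSpecification> generateWfSpec(List<Task> spfTasks) {
        List<TaskSpecification> specs = new ArrayList<>();
        for (Task task : spfTasks) {
            // Each specification starts with default values that the
            // architect successively replaces with correct information.
            String defaultPattern = switch (task.type()) {
                case AUTOMATIC -> "ServiceOperation"; // may later be changed to "Bundle"
                case MANUAL -> "HumanTaskManual";
                case USER_EXPERIENCE -> "GUI";
                case SUB_PROCESS -> "SeparateProcess"; // or "InlinedFlow"
            };
            specs.add(new TaskSpecification(task.id(), defaultPattern));
        }
        return specs;
    }
}
```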
Generation of the BPEL Implementation

The BPEL implementation can be generated by Algorithm 3 in Figure 6. All required information has been specified: information about each task is stored in the WFSpec model, process fragments have been modeled and the ModelInjection model created, and BPEL-specific attributes have been defined in the BPELCodeGen model. Algorithm 3 merges information from these three models with information from the Solution Process Flow model and generates BPEL code and related artifacts. Only control flow logic and data mappings are not generated.
TOOL DEVELOPMENT

In this section we describe a tool called Danske Bank Workbench (DBW) that was developed as part of this research. It implements the metamodels and transformations described in the last section. Hence, it directly supports Danske Bank's development methodology, modeling concepts and use of technology.
Tool Architecture

Danske Bank Workbench is built on the Eclipse platform (Eclipse, 2008) and various Eclipse open source projects. The Eclipse Modeling Framework (EMF) (Budinsky et al., 2003) has been used for defining the abstract syntax, or metamodels, of the DSLs that have been implemented. The concrete syntax of the DSLs and editor support have been implemented by using the Graphical Modeling Framework (GMF, 2008), while openArchitectureWare (oAW) (oAW, 2007) has been used to implement the semantics of the DSLs as model-to-model and model-to-text transformations. Danske Bank Workbench consists of several independent tools for developing the different artifacts in the development process. These are depicted in Figure 10, which also illustrates the dependencies on other Eclipse projects. The names of the projects conform to the names of the metamodels previously described.
Figure 10. Overview of Danske Bank Workbench and its dependencies on other Eclipse projects.
Metamodels and Editors

All metamodels have been modeled in Rational Software Architect as UML class diagrams. Each of these was exported as an XMI representation of UML and imported into Eclipse by using the EMF model creation wizard, which comes as a part of the EMF project. The GMF editors were created based on the EMF metamodels. Figure 11, Figure 12 and Figure 13 illustrate the GMF-based Eventmap, SPF and Workflow Specification (WFSpec) editors in action.
Transformations

The three transformation algorithms described previously have been implemented in oAW. Algorithms 1 and 2 have been implemented as model-to-model transformations using the Xtend language. Algorithm 3, which generates the BPEL code, has been implemented as a model-to-text transformation using the Xpand language. The implementation is quite complex. It is implemented as a graph transformation that recursively runs through the SPF control flow, starting with the initial node. When a model injection or a sub process is detected, the corresponding process fragment or sub process is interpreted and BPEL code generated, which must then be merged into the partly generated BPEL code. Much bookkeeping is required to handle the associations between models, as four different models, i.e. SPF, WFSpec, ModelInjection and BPELCodeGen, are used as input to the transformation. A number of utility functions have been written in Xtend and in Java to support this.
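The recursive structure of this transformation can be sketched in plain Java rather than Xpand; the node interface and the lookup methods below are simplified stand-ins for the four input models:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified sketch of the recursive control flow traversal behind
// Algorithm 3. The real inputs are the SPF, WFSpec, ModelInjection and
// BPELCodeGen models; here they are reduced to illustrative lookups.
public class BpelGenerationSketch {

    interface Node {
        String id();
        List<Node> successors();
    }

    private final StringBuilder bpel = new StringBuilder();
    private final Set<String> visited = new HashSet<>();

    // Recursively walks the SPF control flow from a node and appends BPEL.
    void generate(Node node) {
        if (node == null || !visited.add(node.id())) {
            return; // already emitted; cyclic behavior is not supported by the prototype
        }
        // A process fragment registered for this point in the flow (a model
        // injection) is interpreted first, and its generated BPEL is merged
        // into the partly generated code.
        for (Node fragment : injectionsBefore(node)) {
            generate(fragment);
        }
        bpel.append(emitActivity(node));
        for (Node next : node.successors()) {
            generate(next);
        }
    }

    // Illustrative lookups; in the prototype this bookkeeping spans the
    // WFSpec, ModelInjection and BPELCodeGen models and is supported by
    // utility functions written in Xtend and Java.
    List<Node> injectionsBefore(Node node) { return List.of(); }

    String emitActivity(Node node) {
        // The WFSpec model decides which implementation pattern to emit;
        // an <invoke> stands in for the general case here.
        return "<invoke name=\"" + node.id() + "\"/>\n";
    }
}
```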
Tool Utilities

Several tool utilities have been developed to enhance the usability of Danske Bank Workbench and to smooth the use of the different tools. The users are guided from one step in the development process to the next by these utilities.
Transformation Execution

One kind of tool utility is the generation of “the next” development artifact in the development process, i.e. the execution of the transformation workflows that implement Algorithms 1, 2 and 3. They are implemented as actions that appear on the context menu when the user right-clicks on an event in an Eventmap model, on the canvas of an SPF model, or on the canvas of a WFSpec model.

Figure 11. Event map editor with events for the CustomerQuickLoan project. External and Timedependent events can be modeled directly from the tool palette and required information can be specified as properties.

Service Repository Data Extract

The architect has to find definitions of the input and output data structures for service operations and put them into the WFSpec model. The user right-clicks on a task specification for an Automatic task and chooses “Retrieve Repository Data”. The executed action looks up the defined service operation in (a mock-up of) Danske Bank's centralized service repository, retrieves the definitions of the data structures and updates the WFSpec model with these.

Figure 12. Solution Process Flow editor. The ApplyForLoan process is modeled. Task and connection types are available from the tool palette. The concrete syntax is customized for tasks as well as edges.

Figure 13. WFSpec editor with task specifications for the ApplyForLoan process. Information can be modeled precisely for Automatic, Manual and UserExperience tasks.

Persistence of Manually Changed BPEL Code

Generated BPEL code needs to be updated with data mappings and control flow logic. A small persistence framework has been developed that allows the developer to persist logic from within an assign node or a control link in a separate file.
The developer simply right-clicks on the assign node or control link and chooses “Persist element”. The action creates a separate file where the assign or control flow logic is persisted. The next time the transformation that implements Algorithm 3 is executed, the manually changed BPEL code is overridden, but afterwards the persisted changes are copied into the newly generated BPEL code.
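The idea behind this persistence framework can be sketched as a small store that maps element identifiers to manually written logic and copies it back after each regeneration. The storage format and method names below are illustrative, not the actual framework:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Illustrative sketch: manually written assign or control link logic is
// stored per element ID and copied back into the freshly generated BPEL
// code after each run of Algorithm 3.
public class ManualLogicStore {

    private final Properties store = new Properties();
    private final Path file;

    ManualLogicStore(Path file) throws IOException {
        this.file = file;
        if (Files.exists(file)) {
            try (Reader in = Files.newBufferedReader(file)) {
                store.load(in);
            }
        }
    }

    // Invoked by the "Persist element" action: store the manually written
    // logic under the element's identifier.
    void persist(String elementId, String logic) throws IOException {
        store.setProperty(elementId, logic);
        try (Writer out = Files.newBufferedWriter(file)) {
            store.store(out, "persisted BPEL logic");
        }
    }

    // Invoked after regeneration: copy the persisted logic back into the
    // newly generated element, if any was stored for it.
    String restore(String elementId, String generatedDefault) {
        return store.getProperty(elementId, generatedDefault);
    }
}
```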
Customer Quick Loan Retooled

Danske Bank Workbench will now be illustrated by applying it to the example. Figure 14 illustrates a workflow of the development process with the artifacts that are created and the transformations between them. The letter tags in the figure refer to screen dumps of tool utilities and artifacts developed for the ApplyForLoan process. They are depicted in Figure 16, Figure 17 and Figure 18, which follow Figure 15 at the end of the chapter.
Eventmap

First, a business analyst creates a new Eventmap. All business events are modeled as either external or timedependent, and scenarios are added to each event (Figure 16a). The editor provides direct support for these concepts from the tool palette: the analyst simply drags and drops events and scenarios onto the canvas. The property view reflects the properties of the selected event type, where e.g. the priority can be set to low, medium or high and business possibilities can be described. Event types and properties directly reflect the defined Eventmap metamodel.
Solution Process Flow

After finishing the event map, the business analyst has to create a Solution Process Flow for each event. The analyst simply right-clicks on the event, for instance the ApplyForLoan event, in the Eventmap editor and chooses “Generate SPF” (Figure 16b). An empty Solution Process Flow is generated in a subfolder named “SPF” and is given the same name as the business event. It is then modeled by either the business analyst or the solution architect. Tasks may be modeled directly as Automatic, Manual or UserExperience (as defined by the SPF metamodel) by dragging them onto the canvas from the tool palette. Connections of type Process, Dialog or ProcessDialog are also directly available.
Workflow Specification

When the Solution Process Flow is complete (Figure 16c), the architect right-clicks on it and chooses “Generate WFSpec model” (Figure 16d). A WFSpec model is generated under the Implementation folder, in a subfolder named after the Solution Process Flow. It contains task specifications for all tasks and has been populated with default data. The task specifications and all objects inside them conform directly to the WFSpec metamodel. Now, information has to be entered into the specification. For example, the architect defines that the CreateLoans task must be implemented by the Bundle pattern; he or she selects the Operation object in the CreateLoans task specification and, in the properties view, changes the type from ServiceOperation to Bundle (Figure 16i).
Subprocess

The Confirm task is modeled as a SubProcess task type in the Solution Process Flow. The architect chooses that it must be implemented as an inlined flow in the BPEL process by selecting the Confirm task specification in the WFSpec model and setting the Type property to InlinedFlow (Figure 16j). The subprocess to which the Confirm task refers is generated by right-clicking on it and choosing “Generate SubProcess”. An empty sub process is created and opened automatically. It is named according to the name of the Confirm task and
Figure 14. Workflow for using tools through the customized development process. Letter tags refer to screen dumps in Figure 16. A thick arrow indicates a tool utility while a thin arrow indi
Figure 15. File structure of the Customer Quick Loan project containing all generated files.
Figure 16. Using Danske Bank Workbench
Figure 17. Using Danske Bank Workbench (cont’d)
Figure 18. Using Danske Bank Workbench (cont’d)
optionally put in a subdirectory if the path property of the Confirm task has a value. The subprocess is now modeled as a sequence of two automatic activities (Figure 16g), and its Workflow Specification can be generated (Figure 16h).
Model Injection

The architect and the developer recognize that an additional task is needed in the physical implementation. The task should set the business state
of the process instance to either “Approved” or “Rejected”, depending on the outcome of the AssessRisk task. They right-click on the control link in the Solution Process Flow that connects the AssessRisk task with the CreateLoans and Reject tasks, and choose “Create Injection”. An empty model is created under the Injections folder and is automatically opened in an SPF editor. The developer models the process fragment as one automatic activity (Figure 16l) and generates the Workflow Specification for it (Figure 16m). The bookkeeping of model injections is handled by the ModelInjection model, illustrated in Figure 16q. It contains one injection that has two important properties: the injection point in the Solution Process Flow, which is the ID of the control link, and the process fragment to inject (SPF to Inject) at the injection point. Before the WFSpec metamodel and editor existed, all the design decisions were modeled in the Control Flow Behavior without any reuse of the Solution Process Flow, and required additional information was defined in textual documents. Now, the project team has modeled three processes: one SPF, one subprocess and one process fragment. They are all modeled in the same language, and each has a corresponding WFSpec model.
Synchronizing Data

All automatic task specifications must be synchronized with the centralized Service Repository to obtain correct input and output data definitions. Figure 16o shows the selected CreateContent task specification in the WFSpec model for the Confirm subprocess. The modeler has right-clicked on it and selected “Retrieve Repository Data”. The operation name for the CreateContent specification has been set to CREATECONTENT. The same operation name exists in the service repository, which can also be seen in the figure. The action retrieves data definitions from the repository and populates the WFSpec model with these.
Subsequently, the BPEL code generator can use these data structures to create a correct WSDL document for the service operation. Without this import utility, the developer would have to find data definitions and create XSD schemas manually.
Code Generation

The developer sets parameters for the code generation in a BPELCodeGen model before executing the BPEL code generation. Previously, these design decisions were not documented. Figure 16r illustrates that the developer has selected default values for the code generation: the BPEL code will be generated in the same project as the models, and the WSDL files will be located in the same directory as the BPEL file. The developer generates the BPEL code by right-clicking on the WFSpec model and choosing “Generate BPEL” (Figure 16n). Figure 15 shows the Customer Quick Loan project with all files generated through the development process, and Figure 16p shows the generated BPEL code opened in the Eclipse open source BPEL editor. Only the event map has been created manually. The rest of the artifacts have been created by tool utilities supporting the enterprise specific development process. Hence, the file structure follows the specified standards, and traceability between models can be ensured. Without Danske Bank Workbench, these artifacts and all the information bookkeeping would have to be handled manually by the project team.
EVALUATION

The tool was evaluated through an empirical test that involved five people employed at Danske Bank. They have all worked as workflow developers. Two of them have experience from working as, or closely together with, a business analyst, and one of them is a solution architect. They used Danske Bank Workbench to model an event map,
a Solution Process Flow and a Workflow Specification, and to generate the BPEL code. The business scenario was the same as presented in this chapter. They received a one-page description of how to use the tool, and from this description they used about 30-40 minutes to complete the exercise. They filled out questionnaires with 12 questions and were interviewed about their experiences with the tool. Each question asked about the experience of using the tool compared to current practice in Danske Bank. The questions and their ratings can be found in Table 3. The developers found that the tool would improve their productivity significantly and that it would be easier to work with. They were especially happy with the Danske Bank specific modeling capabilities: it was much easier and more intuitive to work with domain specific modeling. Further, it was easier to comprehend the workflow specific information that had to be specified for the Solution Process Flow (questions 1-5). Some of the developers suggested that validation rules would improve the development process, as a modeler would be caught if required information was not specified. In general, they found that model driven development would help improve their daily
work. This is reflected in questions 6, 7, 9, 10 and 11, which all got an average score of 5 out of 6, equal to “Better”. One of the developers suggested that data and control link logic should also be modeled to allow 100% code generation. Some of the developers were quite skeptical about the quality of the generated code, as they suspected that manually written code would perform better than generated code (question 8). However, they thought that the number of errors would be lower in generated code than in manually written code (question 12). The evaluation has several limitations: the number of participants could be increased to include a more diverse population of people, the case could be extended to a realistic business scenario, and the questions could be accompanied by measurements of the number of errors in, and the efficiency of, the code. Further, a control group of developers could apply the current method and tools in Danske Bank to the business scenario, and the results from the two groups could be compared. However, making a realistic evaluation of a prototype tool in an industrial setting is extremely difficult. It is hard to get permission to use the time of the right people, the tool is not mature enough
Table 3. Questions and mean answers for the empirical evaluation. The ratings were: 1 = “much worse”, 2 = “worse”, 3 = “a little worse”, 4 = “a little better”, 5 = “better” and 6 = “much better”. The mean value is given in parentheses after each question.

1. How is the Danske Bank specific syntax to work with compared to Websphere Business Modeler? (5.5)
2. How is it to work with the WFSpec editor compared to MS Word? (5.6)
3. Is the information easier to comprehend and access? (5.2)
4. How is it to comprehend the number of modeling artefacts and locate where they are? (5.4)
5. Are the tool utilities helpful, and do they support the development process? (5.4)
6. Is the code generation to be preferred over manual translation? (5.0)
7. Do you believe in model driven development as the right direction to go in? (5.0)
8. How is the quality of generated code compared to manually written code? (3.8)
9. Do you prefer to model and generate the solution instead of manually implementing it? (5.0)
10. Does the tool eliminate tedious work? (5.0)
11. Will the tool influence the development productivity? (5.0)
12. Will the tool decrease the number of errors in implemented code? (4.8)
for large scale testing, and the tool requires more development to support a realistic business scenario. Despite these limitations, the evaluation shows that the tool provides significant improvements over the current use of commercial tools in Danske Bank.
DISCUSSION

We have used Danske Bank Workbench for modeling and implementing the ApplyForLoan business process. The exemplification of the tool and the empirical evaluation have shown that the development process becomes more efficient, as the different experts are supported directly in their work. They are able to use familiar domain concepts directly in the modeling tools, they are guided to provide correct information, and the execution of the transformation algorithms has been automated. We have shown that it is possible to define and utilize a number of DSLs and tools to effectively support an enterprise specific development process for business process modeling and implementation. Danske Bank Workbench is a prototype, and it therefore has a number of limitations and points for improvement:

• Consistency checking. We have not defined methods, nor implemented tools, to check consistency between different models.
• Validation and modeling constraints. Validation rules and constraints on how a model can be constructed should be specified by the team responsible for defining the metamodels. These rules and constraints should be enforced by the modeling tool to avoid the creation of invalid models.
• Control flow. Several control flow structures, such as cyclic behavior and loop constructs, cannot be handled by the transformation.
• Data mapping. The prototype and the development process might be improved by abstracting the definition of data mappings to either the WFSpec model or to a generated Java class that would be responsible for the data mapping.
• Restrictions on the SPF. The prototype only supports a one-to-one relationship between a Solution Process Flow and a BPEL implementation. In reality there are often cases where an SPF might be divided into several BPEL processes, or several SPFs may be merged into one BPEL implementation.

Implementing the above items in Danske Bank Workbench is a demanding task. First, it requires further analysis of requirements in Danske Bank. Second, it requires the design and implementation of several advanced tool utilities. Especially the last item may turn out to be very hard to specify and implement.
SUMMARY AND FUTURE WORK

In the introduction of the chapter we postulated that general purpose business process modeling and implementation tool suites are not feasible for many enterprises. Using the case study of Danske Bank and an example, we showed that a development team faces many challenges when it uses standard modeling languages and tools but has to use enterprise specific modeling notations, follow an enterprise specific development process and use technology in specialized ways. We abstracted the development process into metamodels and transformation algorithms and developed a tool called Danske Bank Workbench, fitted specially for Danske Bank. The tool implemented the model driven development principles of direct representation and automation, as it allowed creating models directly in Danske Bank specific concepts and automated the generation of lower level models and code. We saw through the example that it is possible to achieve an efficient model driven development process where a project team collaborates to create different modeling abstractions of a business process, with tool-based transformations and ensured synchronicity between the different modeling abstractions. Using the tool, information only
has to be defined once, and it is easy to comprehend. Knowledge of implementation patterns is reused by automated transformations. Several tool utilities support the development process, which makes Danske Bank Workbench very efficient to work with. An empirical evaluation of the tool confirmed this. Hence, we have confirmed the hypothesis set up in the introduction, which stated that applying the basic model driven development principles of direct representation and automation to BPM tools would solve many of the experienced challenges. Danske Bank Workbench was not difficult to build, as many language workbenches exist for building metamodels, editors and transformations (though it did require deep insight into various Eclipse technologies and MDD concepts). However, it has several limitations, and it only addresses a small subset of the business processes that may be modeled. It requires much more effort to make it a production ready tool that can be used by the organization. Despite a promising prototype, our guess is that only a very limited number of enterprises will go this way and implement their own tools. While it may be economically beneficial to develop one's own tools, there might be political reasons not to do so. To answer the research question set up in the introduction, we can now say: “Defining and developing a model driven development tool to support an enterprise specific business process development process seems promising. It will heighten the productivity of development teams and probably cause fewer errors in implementations. However, it requires a high degree of expertise in model driven development methodology and technology to develop such a tool. It will probably be unachievable for most enterprises.” Although language workbenches provide substantial support in the development of model driven development tools, it should be much easier to customize one's own BPM languages and tools. For future research we therefore suggest working on
tool-based frameworks that support extending predefined BPM languages, editors and visualizations to a specific enterprise. This would require less investment, and it would be easier for an enterprise without experienced tool developers to customize BPM tools instead of developing them from scratch.
REFERENCES

Bézivin, J., Hammoudi, S., Lopes, D., & Jouault, F. (2004). An experiment in mapping Web services to implementation platforms (Tech. Rep.). LINA, University of Nantes.

Booch, G., Brown, A., Iyengar, S., Rumbaugh, J., & Selic, B. (2004). An MDA manifesto. Business Process Trends - MDA Journal. Retrieved from http://www.bptrends.com/publicationfiles/05-04%20COL%20IBM%20Manifesto%20-%20Frankel%20-3.pdf

Bordbar, B., & Staikopoulos, A. (2004a). Modelling and transforming the behavioural aspects of web services. In Third Workshop in Software Model Engineering (WiSME2004) at UML, Lisbon, Portugal.

Bordbar, B., & Staikopoulos, A. (2004b). On behavioural model transformation in Web services. In Conceptual Modelling for Advanced Application Domains (eCOMO) (pp. 667-678). Shanghai, China.

BPEL. (2003). Business process execution language for Web services (BPEL4WS), version 1.1. Retrieved from http://www-128.ibm.com/developerworks/library/specification/wsbpel

Brahe, S. (2007). BPM on top of SOA: Experiences from the financial industry. In G. Alonso, P. Dadam, & M. Rosemann (Eds.), BPM 2007 (LNCS 4714, pp. 96-111).
Brahe, S. (2008). An experiment on creating enterprise specific BPM languages and tools (Tech. Rep. ITU-TR-2008-102). IT University of Copenhagen.

Brahe, S., & Bordbar, B. (2006). A pattern-based approach to business process modeling and implementation in Web services. In D. Georgakopoulos (Ed.), ICSOC 2006 (LNCS 4652, pp. 161-172).

Brahe, S., & Østerbye, K. (2006). Business process modeling: Defining domain specific modeling languages by use of UML profiles. In A. Rensink & J. Warmer (Eds.), ECMDA-FA 2006 (LNCS 4066, pp. 241-255).

Budinsky, F., Steinberg, D., Merks, E., Ellersick, R., & Grose, T. J. (2003). Eclipse Modeling Framework: A developer's guide. Addison Wesley.

Dumas, M., & Hofstede, A. H. M. (2001). UML activity diagrams as a workflow specification language. In UML 2001 (LNCS 2185, pp. 76-90).

Dumas, M., van der Aalst, W., & Hofstede, A. (2005). Process-aware information systems: Bridging people and software through process technology. John Wiley & Sons, Inc.

Eclipse. (2008). The Eclipse project. Retrieved from http://www.eclipse.org

Erl, T. (2005). Service oriented architecture: Concepts, technology and design. Prentice Hall.

Flynn, D., Vagner, J., & Vecchio, O. D. (1995). Is CASE technology improving quality and productivity in software development? Logistics Information Management, 8(2), 8-21. doi:10.1108/09576059510084966

Fowler, M. (2005). Language workbenches: The killer-app for domain specific languages? Retrieved from http://martinfowler.com/articles/languageWorkbench.html
Frankel, D. S. (2003). Model driven architecture: Applying MDA to enterprise computing. OMG Press.

GMF. (2008). Graphical Modeling Framework project. Retrieved from http://www.eclipse.org/gmf

Guntama, E., Chang, E., Jayaratna, N., & Pudhota, L. (2003). Extension of activity diagrams for flexible business workflow modeling. International Journal of Computer Systems Science & Engineering, 18(3), 137-152.

Jablonski, S., & Bussler, C. (1996). Workflow management: Modeling concepts, architecture and implementation. London: Intl. Thomson Computer Press.

Jablonski, S., & Götz, M. (2007). Perspective oriented business process visualization. In 3rd International Workshop on Business Process Design (BPD) in conjunction with the 5th International Conference on Business Process Management (BPM 2007), Brisbane, Australia.

Koehler, J., Hauser, R., Kapoor, S., Wu, F. Y., & Kumaran, S. (2003). A model-driven transformation method. In 7th International Enterprise Distributed Object Computing Conference (EDOC 2003) (pp. 186-197).

Koehler, J., Hauser, R., Sendall, S., & Wahler, M. (2005). Declarative techniques for model-driven business process integration. IBM Systems Journal, 44(1), 47-65.

Kroll, P., & Kruchten, P. (2003). The Rational Unified Process made easy: A practitioner's guide to the RUP. Addison Wesley.
Ledeczi, A., Maroti, M., Bakay, A., Karsai, G., Garrett, J., Thomason, C., et al. (2001). The generic modeling environment. In Workshop on Intelligent Signal Processing, Budapest, Hungary. Retrieved from http://www.isis.vanderbilt.edu/Projects/gme/GME2000Overview.pdf

Leymann, F., & Roller, D. (2000). Production workflow: Concepts and techniques. Prentice Hall.

List, B., & Korherr, B. (2005). A UML 2 profile for business process modelling. In Perspectives in Conceptual Modeling, ER 2005 Workshops (LNCS 3770, pp. 85-96).

MDAGuide. (2003). MDA Guide version 1.0.1. Retrieved from http://www.omg.org/docs/omg/03-06-01.pdf

Mernik, M., Heering, J., & Sloane, A. M. (2005). When and how to develop domain-specific languages. ACM Computing Surveys, 37(4), 316-344. doi:10.1145/1118890.1118892

Murata, T. (1989). Petri nets: Properties, analysis and applications. Proceedings of the IEEE, 77(4), 541-580. doi:10.1109/5.24143

oAW. (2007). openArchitectureWare. Retrieved from http://www.openarchitectureware.org

Skogan, D., Grønmo, R., & Solheim, I. (2004). Web service composition in UML. In Eighth IEEE International Enterprise Distributed Object Computing Conference (EDOC'04) (pp. 47-57).

Stahl, T., Völter, M., Bettin, J., Haase, A., & Helsen, S. (2006). Model-driven software development: Technology, engineering, management. Wiley.

Swithinbank, P., Chessell, M., Gardner, T., Griffin, C., Man, J., Wylie, H., & Yusuf, L. (2005). Patterns: Model-driven development using IBM Rational Software Architect. IBM Redbooks. Available at http://www.redbooks.ibm.com/abstracts/sg247105.html?Open
Tolvanen, J.-P., & Rossi, M. (2003). MetaEdit+: Defining and using domain-specific modeling languages and code generators. In OOPSLA '03: Companion of the 18th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (pp. 92-93). New York: ACM.

van der Aalst, W. M. P., Hofstede, A. H. M., Kiepuszewski, B., & Barros, A. P. (2003). Workflow patterns. Distributed and Parallel Databases, 14(1), 5-51. doi:10.1023/A:1022883727209

White, S. (2006). Business process modeling notation (version 1.0). Available at http://www.bpmn.org/Documents/OMG-02-01.pdf

Windsor, J. (1986). Are automated tools changing systems analysis and design? Journal of Systems Management, 37(11), 28-33.
KEY TERMS AND DEFINITIONS

BPM Tools: A collection of modeling and implementation tools specialized for modeling a business process and implementing it as a workflow in a workflow language.

Control Flow Behavior: A physical model of a business process. It specifies how the process should be implemented in BPEL.

Domain Specific Language (DSL): A specialized programming or modeling language that allows expressing solutions directly in concepts of a problem domain.

Eventmap: A model of all business events that may occur in a given business context.

Model Driven Development: A development paradigm that focuses on using models in software development. Models are used for analysis, simulation, verification and code generation.

Model Transformation: A model transformation takes one or several source models and generates one or several target models or textual documents. It is based on a transformation definition that specifies how to map elements in the source DSLs to elements in the target DSLs.

Solution Process Flow: A logical or conceptual model of a business process. It specifies the business logic for one business event.

Workflow Specification: A document that describes additional information required to implement a Solution Process Flow model in Danske Bank's extended BPM system.
This work was previously published in Handbook of Research on Complex Dynamic Process Management: Techniques for Adaptability in Turbulent Environments, edited by Minhong Wang and Zhaohao Sun, pp. 23-56, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 3.20
Semantic Business Process Mining of SAP Transactions Jon Espen Ingvaldsen The Norwegian University of Science and Technology, Norway Jon Atle Gulla The Norwegian University of Science and Technology, Norway
ABSTRACT

This chapter introduces semantic business process mining of SAP transaction logs. SAP systems are promising domains for semantic process mining, as they contain transaction logs that are linked to large amounts of structured data. A challenge with process mining these transaction logs is that the core of SAP systems was not originally designed from the business process management perspective. The business process layer was added later without a full rearrangement of the system. As a result, the system logs produced by SAP are not process-based, but transaction-based. This means that the system does not produce the traces of process instances that are needed for process mining. In this chapter, we show how data available in SAP systems can enrich process instance logs with ontologically structured concepts, and we evaluate techniques for mapping executed transaction sequences to predefined process hierarchies.

DOI: 10.4018/978-1-60566-669-3.ch017

INTRODUCTION

To describe the current situation in dynamic business process environments, we need tools that can assist rapid modeling. Process mining tools meet this requirement by extracting descriptive models from event logs in the underlying IT systems, constructing the business process descriptions from actual data. SAP systems are promising domains for process mining. SAP is the most widely used Enterprise Resource Planning (ERP) system, with a total market share of 27 percent worldwide in 2006 (Pang, 2007). Even though there may be blueprint models defined for how the systems should support organizational business processes, there are often gaps between how the systems are planned to be used and how the employees actually carry out the operations.
The magnitude of data sources in a running ERP system is large, and within SAP there are several event and transaction logs that can be analyzed with process mining. In this process mining work, we use transaction data that describe document dependencies between executed transactions. A transaction in a SAP system can be viewed as a small application. An example of a transaction is “ME51 – Create Purchase Requisition”. As the name indicates, this transaction enables a user to create a purchase requisition. “ME51” is the unique identifier of this transaction, called the transaction code. Such a transaction produces a purchase requisition, which can then be referred to by a purchase order created in another transaction, such as “ME21 – Create Purchase Order”. By tracing such document dependencies, we are able to extract transaction sequences that can be explored and analyzed with process mining. The data in the underlying databases of SAP systems comprise:

• Transactional data – Daily operations, such as sales orders and invoices.
• Master data – Business entities such as customers, vendors and users.
• Ontological data – Metadata for interpretation and structuring of instances.
The transactional data are the basic building blocks for process mining analysis and describe the events that are carried out. In the transactional data we typically find execution timestamps and relations to the involved master data sources. The ontological data in SAP databases can be used to extract descriptions of the transactions and related entities. For instance, in the SAP database there are table structures that contain full text descriptions of transactions and business processes and their internal relationships. The construction and maintenance of ontologies is work-intensive and has so far been a bottleneck for the realization of many semantic technologies.
Ontologies tend to grow huge and complex, and both domain expertise and ontology modeling expertise are needed in ontology engineering (Gulla, 2006). In the underlying databases of SAP systems there are lots of structured data that can be extracted to form and populate ontologies. In semantic business process mining of SAP transactions, we can exploit the available data structures to limit the extent of the ontology engineering work. One particular challenge with process mining of SAP transactions is the many-to-many relationship between transactions and defined business processes. Figure 1 shows an example from the business process hierarchy in SAP. In SAP systems, business processes are defined at four levels: “Enterprise Area”, “Scenario”, “Group” and “Business Process”. At the second lowest level, Figure 1 shows two business processes, “Subsequent debit for empties and returnable packaging” and “Sales activity processing (standard)”. As shown in the hierarchy, both of these business processes can involve the transaction “V+01: Create Sales Call”. The transaction logs in SAP systems contain no information about the business process context. If we do process mining on transaction logs where “V+01: Create Sales Call” occurs, there is no available data that explicitly states whether this transaction was carried out in the context of “Subsequent debit for empties and returnable packaging”, “Sales activity processing (standard)” or another business process. Transaction sequences themselves can be used as input to process mining algorithms to extract flow models and performance indicators. However, if we could map the executed transactions precisely to concepts in the defined business process hierarchies, we would be able to extract business process models with aggregated levels and relate performance indicators to higher level process definitions. In this chapter we show how transaction sequences extracted from SAP systems can be enriched with relations to ontological concepts, and we evaluate three techniques for mapping executed transactions to the standard business process hierarchies in SAP.

Figure 1. Example of business process hierarchies
BACKGROUND

In the past years, Semantic Web technology has gained substantial interest from Business Process Management (BPM) research. Traditional process mining has successfully been shown to extract flow models that describe how activities and organizational units depend on each other in dynamic business process environments. Semantic business process mining (SBPM) takes advantage of the rich knowledge expressed in ontologies and associated process instance data, and extracts semantic models that enable reasoning in a wider context than traditional process mining. SBPM has been proposed as an extension of BPM with semantic web and semantic web service technologies in order to increase and enhance the level of automation that can be achieved within the BPM life-cycle (Alves de Medeiros, 2007; Ma, 2007).
Contemporary information systems (e.g., WFM, ERP, CRM, SCM, and B2B systems) record business events in so-called event logs. Process mining takes these logs to discover process, control, data, organizational, and social structures (van der Aalst, 2007). Within the BPM life-cycle, process mining can be applied to gather knowledge about the past and to find potential for change and optimization. Approaches for semantic process mining have been proposed that incorporate ontologies, references from elements in event logs to ontological concepts, and ontological reasoners. The ontologies define the set of shared concepts necessary for the analysis, and formalize their relationships and properties. The references associate meanings to syntactical labels in event logs by pointing to defined ontology concepts. The reasoner supports reasoning over the ontologies in order to derive new knowledge, e.g., subsumption, equivalence, etc. (Alves de Medeiros, 2008). Alves de Medeiros (2007) points out two important potentials for leveraging process mining
with a conceptual layer:

1. To make use of the ontological annotations in logs and models to develop more robust process mining techniques that analyze on conceptual levels.
2. To use process mining techniques to discover or enhance ontologies based on the data in the event logs.

Pedrinaci (2008) argue that business process analysis activities require semantic information that spans the business, process and IT levels and is easily retrieved from event logs. They have developed the Events Ontology and the Process Mining Ontology, which aim to support the analysis of enacted processes at different levels of abstraction, spanning from fine grain technical details to coarse grain aspects at the business level. ProM is an open source process mining framework that is built up of plug-ins that target different process mining analyses (van der Aalst, 2007). There are already plug-ins and input formats for ProM that target ontological reasoning. The Semantically Annotated Mining eXtensible Markup Language (SA-MXML) format is a semantically annotated version of the MXML input format used by the ProM framework. In short, the SA-MXML format opens for linking elements in logs to concepts in ontologies. The Semantic LTL Checker is a ProM plug-in that allows for semantic verification of properties in SA-MXML logs (Alves de Medeiros, 2008).

EXTRACTION OF SEMANTICALLY ENRICHED TRANSACTION SEQUENCES

SAP systems contain data structures (typically hierarchical) that can be exported to form and populate ontologies. Such data structures include:

• Organizational structures: Employees, roles, departments, controlling areas, etc.
• Geographical structures
• Material and product groupings
• Business process compositions
In some cases these structures can be utilized directly as complete domain ontologies, while in other cases they require manual processing before they can be used as complete domain ontologies or as input to populate general or incomplete ontologies. The extraction of long transaction sequences from SAP systems is a task that involves large amounts of data and multiple database tables. EVS ModelBuilder is a tool designed to support the extraction of entries from SAP systems that are suitable for process mining. The tool was created in cooperation between Businesscape AS and the Information Systems group at the Norwegian University of Science and Technology. In EVS ModelBuilder, the user can describe on a type level how business objects and events are related. Based on these descriptions, the program carries out the necessary database queries, merges data sources and exports transaction logs that can be processed and explored by analysis tools like ProM. More details about how this tool can support the preprocessing phase of process mining projects are described in Ingvaldsen (2007).
POPULATING ONTOLOGIES FROM AVAILABLE DATA STRUCTURES

We will use a simple example to show the magnitude of structured context information available in the transaction logs of SAP systems. Figure 2 shows three SAP tables that can be involved in a semantic process mining project.

Figure 2. Three SAP tables that are used to describe events related to the creation of purchase orders.

The EKKO table contains data describing purchase order headers. Entries in this table contain references to a vendor and to the user that created the purchase order. Both the user and the vendor are described in more detail in separate database tables. USR03 is one of the tables that describe SAP users in detail. It contains the full name of the user and describes which department the user belongs to. LFA1 includes vendor details such as location, specified by country, district and city. Based on data found in the EKKO table, we are able to extract events for the creation of purchase orders. Data in the two other tables can be used to populate two distinct ontologies: one describing the breakdown of the company into departments and employees, and one describing how countries, districts, cities and vendors relate to each other. Figure 3 shows how our example data can describe events in detail and relate them to structured ontological data. In both of the ontologies,
human knowledge is used to structure and place the different concepts into hierarchical levels, but the population of the ontological concepts is done by use of data available in the SAP tables. In real systems, the three tables in Figure 2 involve many more attributes that describe the entity properties. To limit the extent of this example, we have only involved a subset of the available attributes. By including other attributes and related tables in this simple process mining example, we can enrich the events further with context details and link them to other events, such as the creation of purchase requisitions and the receipt of order confirmations and goods. In SA-MXML, the event information and the ontologies are stored in separate files. The event
Figure 3. An example of a “Create purchase order” event that is related to two ontological concepts
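To make the extraction step concrete, the following sketch joins the three tables with a single SQL query over JDBC, producing event records that already carry the user and vendor context needed to link events to the two ontologies. The column names are simplified stand-ins; the actual SAP field names differ, and EKKO alone holds many more attributes:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative sketch: extract "Create purchase order" events from EKKO and
// join in the user (USR03) and vendor (LFA1) context that populates the two
// ontologies. Column names are hypothetical stand-ins, not real SAP fields.
public class PurchaseOrderEventExtractor {

    public static void main(String[] args) throws Exception {
        String sql =
            "SELECT po.document_number, po.created_at, " +
            "       u.full_name, u.department, " +
            "       v.vendor_name, v.city, v.district, v.country " +
            "FROM   EKKO po " +
            "JOIN   USR03 u ON u.user_name = po.created_by " +
            "JOIN   LFA1  v ON v.vendor_id = po.vendor_id";

        // args[0]: JDBC URL pointing at an export of the SAP tables.
        try (Connection con = DriverManager.getConnection(args[0]);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // Each row becomes one event; the department and the vendor
                // location link the event to concepts in the two ontologies.
                System.out.printf("Create purchase order %s at %s by %s (%s), vendor %s in %s%n",
                        rs.getString("document_number"), rs.getString("created_at"),
                        rs.getString("full_name"), rs.getString("department"),
                        rs.getString("vendor_name"), rs.getString("city"));
            }
        }
    }
}
```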
The event log files describe the process instances that are carried out, the work items they consist of, relations to aggregated process definitions, involved users, and execution timestamps. These elements can further point to ontological concepts by use of Uniform Resource Identifiers (URIs).
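The snippet below sketches what such a URI-annotated log entry could look like. The element layout is inspired by SA-MXML's model references, but the exact tags and the ontology URIs here are assumptions made for illustration, not the normative schema.

# One event entry whose elements point to ontological concepts via URIs.
# Tags and URIs are illustrative, in the spirit of SA-MXML annotations.
event_entry = """
<AuditTrailEntry>
  <WorkflowModelElement
      modelReference="http://example.org/procOnt#CreatePurchaseOrder">
    Create purchase order
  </WorkflowModelElement>
  <EventType>complete</EventType>
  <Originator
      modelReference="http://example.org/orgOnt#PurchasingDept.JSmith">
    JSMITH
  </Originator>
  <Timestamp>2007-03-01T10:15:00</Timestamp>
</AuditTrailEntry>
"""
print(event_entry)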
MAPPING EXECUTED TRANSACTIONS TO DEFINED BUSINESS PROCESSES

Much process mining work assumes that event logs extracted from the IT systems contain relations between the executed events and higher level business process definitions. Such information is very helpful if we want to do delta analysis, that is, comparing the actual processes with predefined process models. By comparing the predefined process models with models discovered through process mining, discrepancies between them can be detected and used to improve the processes (van der Aalst & Weijters, 2005). By linking entries in SAP transaction logs to aggregated business process definitions, we can lift the models and performance indicators discovered through process mining from a somewhat system-technical level to higher business levels. Predefined business process hierarchies are available and serialized in database tables in SAP systems. As shown in Figure 1, this hierarchical structure consists of four levels. An enterprise area depicts a business structure that represents a homogeneous unit in the sense of process-oriented structuring. Examples of enterprise process areas are enterprise planning, production, and sales. Business scenarios are assigned to a particular enterprise process area and describe on an abstract level the logical business flow across the different application areas involved in the processes. Process groups contain individual processes that are bundled such that they can be visualized more easily. Processes describe the smallest self-contained business sequences and represent the
possibilities within their transactions, where detailed functions are carried out. Examples of business scenarios, process groups, and processes for the sales enterprise area are shown in Figure 1. In this paper we focus on the sales enterprise area for evaluating techniques for mapping entries in transaction logs to defined business processes. The sales enterprise area deals with the tasks of performance utilization and thus organizes the business relationships in the market. The task of this enterprise area is to provide customers with the goods produced in the enterprise, or with financial or other services offered by the enterprise. This includes the planning and control of distribution channels, from advertisement, inquiry and quotation processing, sales order processing, delivery processing, and invoicing, down to the checking of incoming payments (Keller & Teufel, 1998). A process can be carried out by completing one or more transactions. Figure 4a shows how many processes each transaction is involved in within the enterprise area "sales". Most transactions are involved in only a single process, and on average a transaction is involved in 1.55 processes. This means that for a large share of the transactions in a transaction log we can identify the correct process easily. On the other hand, a significant number of transactions occur in many processes. For the enterprise area "sales", some transactions occur in up to eight processes. These tend to be the most common transactions, and they are the ones we find most frequently in the transaction logs. For such transactions, it is much more challenging to identify the correct process for a given execution context. Figure 4b shows that the number of transactions a process can involve varies significantly. In total, the sales enterprise area consists of 59 standard processes, and on average a process involves 10.92 transactions. Most processes contain only a handful of transactions, but there are processes that can involve close to a hundred transactions.
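The two averages quoted above can be read directly off such a process definition table. The following sketch shows the computation on a toy mapping; the process and transaction names are invented for the example.

from collections import defaultdict

# Hypothetical process -> transactions mapping, standing in for the real
# SAP business process hierarchy tables.
process_to_txns = {
    "Delivery processing": {"VL01", "VL02"},
    "Delivery for returns": {"VL01", "VL02", "VL09"},
    "Billing processing": {"VF01", "VF02", "VF11"},
}

txn_to_procs = defaultdict(set)
for proc, txns in process_to_txns.items():
    for txn in txns:
        txn_to_procs[txn].add(proc)

# Average number of processes per transaction (1.55 for "sales") and
# average number of transactions per process (10.92 for "sales").
avg_procs = sum(map(len, txn_to_procs.values())) / len(txn_to_procs)
avg_txns = sum(map(len, process_to_txns.values())) / len(process_to_txns)
print(avg_procs, avg_txns)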
Figure 4. a) Distribution of process occurrences per transaction within sales. b) Number of transactions per process
Mapping Approaches

A transaction sequence is a list of executed and related transactions ordered by their execution timestamp. They are related in the sense that they produce and consume the same set of resources. For example, an execution of the transaction "ME51, Create Purchase Requisition" produces a purchase requisition that can be consumed (referred to) by the transaction "ME21, Create Purchase Order". Figure 5 shows a transaction sequence with seven entries and a hierarchy with defined business processes. From the transaction logs in SAP systems, we can extract transaction sequences as shown at the bottom of the figure, but the mapping to business process definitions is not explicitly stated in the data. Transaction sequences typically span multiple process boundaries, and therefore each transaction in the sequence must be mapped to the correct business process context. Although the relations between transaction executions and defined business processes are not explicitly stated in the transaction logs, there are approaches for identifying likely mappings. We propose and evaluate three such approaches:
simple lookup, search and indexing of processes, and graph operations. Simple lookup is the naïve approach of searching for processes that contain the transaction execution we want to enrich with business process context. In cases where several processes contain the given transaction, the one with the fewest transactions is selected as the correct business process context. If we use simple lookup to identify the business process context for the transaction "VL01: Create Delivery", we would search for all processes where this transaction occurs. Such a query would return the following process candidates from the sales enterprise area: "Batch search strategy processing (standard)", "Delivery for returnable packaging subsequent debit", "Delivery for returns", and "Delivery processing". Here, the first process candidate would be selected, as it contains the fewest transactions. Search and indexing of processes is a slightly more sophisticated approach where the defined processes are indexed by their involved transactions in a search index. Then, instead of just using a single transaction to form a search query, we also include the other entries of the transaction sequence. The idea behind this approach is that by
Figure 5. A transaction sequence where each entry is mapped to a defined business process.
incorporating the neighbor transactions of a transaction, we have more information about the execution context and are more likely to identify a unique and correct business process definition. Figure 1 shows an example of two processes that both include the transaction "V+01: Create Sales Call". If we have a transaction sequence where this transaction, "VC02: Change Sales Activity", and "VC03: Display Sales Activity" occur together, we can include all three transactions to form a query and retrieve a business process context. In a Lucene¹ search environment, such a query could be expressed as +"V+01" "VC02" "VC03". The plus sign states that the following transaction code is required to exist in the result set entries, and it is used in front of the transaction code that we want to map to a business process definition. The other query terms express optional
transaction codes that are included to describe the specific execution context in more detail. As the last two transactions are only present in the process "Sales activity processing (standard)", the other process, "Subsequent debit for empties and returnable packaging", would not be considered as a candidate process for this execution of "V+01: Create Sales Call". Graph operations are an alternative approach that views the set of candidate solutions as a graph with nodes and edges. The problem that motivated this approach is that several standard SAP processes contain exactly the same set of potential transactions. For instance, "Master transfer for contact documents", "Message transfer for billing documents", "Message transfer for sales documents", and "Message transfer for supplies" are all processes that involve exactly the same set
of transactions: "VL14: Mail control decentralized shipping", "VL20: Display Communication Document", "VL70: Output From Picking Lists", "VPAK: Packing list", and "VT70: Output for Shipments". Neither simple lookup nor search and indexing of processes would be able to distinguish and select one such process, as they contain exactly the same set of transactions. However, a longer transaction sequence typically spans several processes, and a process like "Master transfer for contact documents" is often just one of several. Such a process is more likely to be the correct one if the other transactions in the same transaction sequence are within the same process group, process scenario, or at least enterprise area. For each entry in a transaction sequence, this approach uses simple lookup to retrieve sets of solution candidates. Then, Dijkstra's shortest path algorithm is used to find the shortest distance through the business process hierarchy between the set of process candidates and the transactions in the actual transaction sequence. For a given transaction, the business process with the lowest average distance to all entries in the actual transaction sequence is selected as the right candidate.
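To make the first two approaches concrete, here is a minimal Python sketch. The process definitions are toy data, and the context scoring is only a crude stand-in for a Lucene query of the form shown above.

# Toy process definitions; a real implementation would read these from
# the SAP business process hierarchy tables.
PROCESSES = {
    "Batch search strategy processing (standard)": {"VL01"},
    "Delivery for returns": {"VL01", "VL09"},
    "Delivery processing": {"VL01", "VL02", "VL09"},
    "Sales activity processing (standard)": {"V+01", "VC02", "VC03"},
    "Subsequent debit for empties and returnable packaging": {"V+01", "VL02"},
}

def simple_lookup(txn):
    """Pick the containing process with the fewest transactions."""
    candidates = [p for p, txns in PROCESSES.items() if txn in txns]
    return min(candidates, key=lambda p: len(PROCESSES[p]), default=None)

def context_lookup(txn, context):
    """Require txn; rank candidates by how many of the other transactions
    in the sequence they also contain (roughly the query shown above)."""
    candidates = [p for p, txns in PROCESSES.items() if txn in txns]
    return max(candidates,
               key=lambda p: len(PROCESSES[p] & set(context)),
               default=None)

print(simple_lookup("VL01"))                     # smallest candidate wins
print(context_lookup("V+01", ["VC02", "VC03"]))  # context disambiguates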
Evaluation

To evaluate the three approaches, we constructed test transaction sequences spanning alternative routes through the following sales processes: Sales Quotation Processing, Sales Order Processing, Delivery Processing, Goods Issue Processing, and Billing Processing. The sequence of these processes and the transactions they can involve are shown in Figure 6. This set of possible transactions was used together with the process definitions in (Keller & Teufel, 1998) to construct transaction sequences for testing that are realistic and annotated with the correct business process context. In total, 12 test transaction sequences were constructed. To make them realistic with respect to typical process mining logs, the constructed transaction sequences vary considerably in length, and some are incomplete with respect to describing end-to-end processes. In process mining, extracted transaction logs typically contain transactions carried out within a certain time frame. Entries that are close to the start or end of the time frame are frequently incomplete, as only parts of their history overlap with the selected time frame. Cancelled processes also lead to incomplete transaction sequences.
Figure 6. Typical sales processes and their transactions. The process sequences are read from left to right.
Figure 7. Test set of ten transaction sequences where each entry is labeled with a correct business process.
The test transaction sequences and their business process annotations are shown in Figure 7. Figure 8 shows to which extent each approach is able to identify the correct mapping for the twelve test sequences. On average, search and
indexing of processes identifies the correct mapping in 45% of the test cases. For simple lookup and graph operations, the average scores are 42% and 31%, respectively. In four of the test cases, the approaches have the same score.
Figure 8. Percentage scores for how many correct business process mappings the three approaches are able to identify. The scores are shown as bars for each of the 12 tests.
The graph operations approach has the highest deviation in its overall performance. In several of the test sequences, this approach identifies none of the mappings correctly. A reason for this might be that the approach can fall into local areas of the business process hierarchy where the shortest-path criterion is optimized but the identified mappings are distant from the correct solution. Search and process indexing outperforms the simple lookup approach in two of the test cases. In both of these, search and process indexing finds the correct mapping because the query contains context information that enables the system to limit the solution space and eliminate false candidates. All three approaches could be modified and tuned to improve performance. In the evaluations, all approaches describe the context by including a varying number of transaction codes. These descriptions could also be extended with information such as involved documents, users, vendors, products, geographical locations, and so on. As we get a better and more complete picture of the context around the execution of a transaction, we can limit our solution space further and increase the probability of identifying correct mappings. However, such systems would also require manual labeling of training sets or search indices where such context information is related to the defined business process hierarchies. Another alternative for improving the mapping between transaction executions in the transaction logs and defined business processes is to combine the methods. Search and process indexing or simple lookup could be chosen as the main strategy for suggesting the right business process, and in those cases where the business process candidates contain the same set of transactions, the graph operations approach could be applied to make a suggestion.
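A sketch of this combined strategy follows, under the assumption that a graph_distance() function implementing the Dijkstra-based hierarchy measure is available; both that function and the data layout are illustrative.

def map_transaction(txn, context, processes, graph_distance):
    """Combined strategy: search/lookup first, graph distance only as a
    tie-break. `processes` maps process names to transaction sets;
    `graph_distance(process, txn)` is assumed to return the hierarchy
    distance used by the graph operations approach."""
    candidates = [p for p, txns in processes.items() if txn in txns]
    if not candidates:
        return None
    # Main strategy: keep the candidates best supported by the context.
    best_score = max(len(processes[p] & set(context)) for p in candidates)
    best = [p for p in candidates
            if len(processes[p] & set(context)) == best_score]
    if len(best) == 1:
        return best[0]
    # Candidates are indistinguishable by content (possibly identical
    # transaction sets): fall back to the graph-distance criterion.
    return min(best,
               key=lambda p: sum(graph_distance(p, t) for t in context))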
FUTURE TRENDS

In recent years there has been a consolidation between ERP and Business Intelligence (BI) systems. The largest ERP vendors, SAP and Oracle, have acquired the BI vendors Business Objects and Hyperion, respectively, and are integrating their solutions into their customer offerings. Now, ERP does not only provide a shared data source for various organizational units, but also a valuable data source that companies can utilize to extract knowledge and competitive advantage. Software vendors that focus on supporting the Business Process Management (BPM) lifecycle, like IDS Scheer, have already shipped products that borrow elements from process mining. The ARIS Process Performance Manager enables companies to relate performance indicators to real business flows. As ERP solutions are also moving toward Service-Oriented Architectures (SOA) and BPM, we believe that future business applications in the ERP and BI area will focus more on the analysis of business flows. This will create a need for information systems that produce event logs containing rich structured context information and relations to higher level process definitions. In this paper we have focused on showing how elements in the event logs can be linked to ontological concepts. In addition, event logs can also be enriched with numerical and date attributes. For a purchase process, events can be enriched with order amounts, values, expected delivery dates, and so on. With such information available in the event logs, we can not only describe how dynamic business processes are executed, how the loads are distributed, and where most of the time is consumed, but also to what extent the actual processes meet expectations. As shown throughout this paper, the amount of information that can be related to event logs is enormous, and there is large potential for merging elements from process mining, data
mining, and ontological reasoning. Process mining is used to find out how people and systems work. Ontological reasoning provides answers to test hypotheses, and data mining can extract descriptive and predictive models that complete the picture. Had the core of ERP systems originally been designed for business process management and the monitoring of process instances, we would not need techniques to identify mappings between historical transaction executions and defined business processes. Traces of process instance information in transaction logs would create valuable opportunities for using process mining to describe the real business flow and to measure deviations against business process definitions and procedures. However, as large software vendors need to be backward compatible with older systems and customers, it is difficult to make major modifications to the kernel design. Until traces of process instance information are provided in the event log structures, a process miner needs tools that can assist in the mapping of transaction executions to defined business processes.
CONCLUSION

Transaction logs in SAP systems contain a substantial amount of context information that can be utilized to create references from execution instances to ontological concepts. Also, SAP databases contain ontological data that describe relations between concepts involved in transaction executions. This availability of context information and ontologically structured concepts reduces manual ontology engineering work and makes SAP systems a promising arena for semantic process mining. General ontologies, like the Process Mining Ontology, are important for the semantic interpretation of common process mining terms. However, customized domain ontologies are also important for being able to reason over enterprise-specific
concepts. As the reusability of such enterprise-specific ontologies is low, tools that utilize available data to assist or automate parts of the ontology engineering process are of great value. In this paper we have shown how available data structures in SAP systems can be utilized to populate ontologies and construct transaction logs for semantic business process mining. We have in particular focused on the challenge of making use of the business process hierarchy definitions available in SAP systems. Three approaches for mapping entries in transaction sequences to predefined business process hierarchies were evaluated, and the results show that it is difficult to completely automate the process of identifying such mappings. By building a more complete information picture around the execution of transactions, we can limit the solution spaces and increase the probability of identifying correct business process mappings. Correct mappings between transaction log entries and defined business processes enable process mining techniques to construct models that are lifted from a somewhat system-technical transaction flow focus up to aggregated levels that describe higher level business terms. This makes process mining models valuable both for IT and business people.
REFERENCES

Alves de Medeiros, A. K., Pedrinaci, C., van der Aalst, W. M. P., Domingue, J., Song, M., Rozinat, A., et al. (2007). An outlook on semantic business process mining and monitoring. In OTM 2007 Workshops (pp. 1244-1255). New York: Springer.

Alves de Medeiros, A. K., van der Aalst, W. M. P., & Pedrinaci, C. (2008). Semantic process mining tools: Core building blocks. Paper presented at the 16th European Conference on Information Systems, Galway, Ireland.
Gulla, J. A., Borch, H. O., & Ingvaldsen, J. E. (2006). Unsupervised keyphrase extraction for search ontologies. In Natural Language Processing and Information Systems (pp. 25-36). New York: Springer.

Ingvaldsen, J. E., & Gulla, J. A. (2007). Preprocessing support for large scale process mining of SAP transactions. In Business process management workshops (pp. 30-41). New York: Springer.

Keller, G., & Teufel, T. (1998). SAP R/3 process oriented implementation. Reading, MA: Addison-Wesley.

Ma, Z., Wetzstein, B., Heymans, S., & Anicic, D. (2007). Semantic business process repository. In Proceedings of the International Workshop on Semantic Business Process Management (SBPM 2007), CEUR Proceedings.

Pang, C., Eschinger, C., Dharmasthira, Y., & Motoyoshi, K. (2007). Market share: ERP software, worldwide, 2006. Gartner report.

Pedrinaci, C., & Domingue, J. (2007). Towards an ontology for process monitoring and mining. In Proceedings of the Workshop on Semantic Business Process and Product Lifecycle Management (SBPM-2007), CEUR-WS.

van der Aalst, W. M. P., Reijers, H. A., Weijters, A. J. M. M., van Dongen, B. F., Alves de Medeiros, A. K., Song, M., & Verbeek, H. M. W. (2007). Business process mining: An industrial application. Information Systems, 32(1), 713-732. doi:10.1016/j.is.2006.05.003

van der Aalst, W. M. P., van Dongen, B. F., Günther, C. W., Mans, R. S., Alves de Medeiros, A. K., Rozinat, A., Verbeek, H. M. W., & Weijters, A. J. M. M. (2007). ProM 4.0: Comprehensive support for real process analysis. In Petri Nets and Other Models of Concurrency – ICATPN 2007 (pp. 484-494). New York: Springer.
van der Aalst, W. M. P., & Weijters, A. J. M. M. (2005). Process mining. In M. Dumas, W. M. P. van der Aalst, & A. H. M. ter Hofstede (Eds.), Process aware information systems (pp. 235-255). Wiley Interscience.
KEY TERMS AND DEFINITIONS

Process Mining: Research area that aims at creating tools for discovering process, control, data, organizational, and social structures from event logs.
SAP Systems: ERP solutions delivered by SAP for large organizations.
Semantic Business Process Management (SBPM): An extension of BPM with semantic web and semantic web service technologies in order to increase and enhance the level of automation that can be achieved within the BPM life-cycle.
Transaction: A small application within SAP systems that has a unique transaction code.
Transaction Log: Data source in a transaction-based information system that describes historical events.
Transaction Sequence: An ordered chain of events that describes the transactions carried out, how they depend on each other, when they were executed, and their relations to involved entities like users, vendors, products, etc.
ENDNOTE

1. Lucene is an open source information retrieval library, supported by the Apache Software Foundation. See: http://lucene.apache.org/
This work was previously published in Handbook of Research on Complex Dynamic Process Management: Techniques for Adaptability in Turbulent Environments, edited by Minhong Wang and Zhaohao Sun, pp. 416-429, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 3.21
Mining Association Rules from XML Documents

Laura Irina Rusu, La Trobe University, Australia
Wenny Rahayu, La Trobe University, Australia
David Taniar, Monash University, Australia

DOI: 10.4018/978-1-60566-330-2.ch011
Abstract

This chapter presents some of the existing mining techniques for extracting association rules out of XML documents in the context of rapid changes in the Web knowledge discovery area. The initiative of this study was driven by the fast emergence of XML (eXtensible Markup Language) as a standard language for representing semistructured data and as a new standard of exchanging information between different applications. The data exchanged as XML documents become richer and richer every day, so the necessity to not only store these large volumes of XML data for later use, but to mine them as well to discover interesting information has become obvious. The hidden knowledge can be used in various ways, for example, to decide on a business issue or to make predictions about future e-customer behaviour in a Web application. One type of knowledge that can be discovered in a collection of XML documents relates to association rules between parts of the document, and this chapter presents some of the top techniques for extracting them.
Introduction

The amount of data stored in XML (eXtensible Markup Language) format or exchanged between different types of applications has been growing during the last few years, and more companies are now considering XML as a possible solution for their data-storage and data-exchange needs
(Laurent, Denilson, & Pierangelo, 2003). The first immediate problem for researchers was how to represent the data contained in old relational databases using this new format, so various techniques and methodologies have been developed to solve this problem. Next, users realised that they not only required storing the data in a different way, which made it much easier to exchange data between various applications, but they also required getting interesting knowledge out of the entire volume of stored XML data. The acquired knowledge might be successfully used in the decisional process to improve business outcomes. As a result, the need for developing new languages, tools, and algorithms to effectively manage and mine collections of XML documents became imperative. A large volume of work has been developed, and research is still being pursued to obtain solutions that are as effective as possible. The general goal for researchers is to discover more powerful XML mining algorithms that are able to find representative patterns in the data, achieve higher accuracy, and be more scalable on large sets of documents. The privacy issue in knowledge discovery is also a subject of great interest (Ashrafi, Taniar, & Smith, 2004a). XML mining includes both the mining of structures as well as the mining of content from XML documents (Nayak, 2005; Nayak, Witt, & Tonev, 2002). The mining of structure is seen as essentially mining the XML schema, and it includes intrastructure mining (concerned with mining the structure inside an XML document, where tasks of classification, clustering, or association rule discovery can be applied) and interstructure mining (concerned with mining the structures between XML documents, where the applicable tasks could be clustering schemas and defining hierarchies of schemas on the Web, and classification applied with namespaces and URIs [uniform resource identifiers]). The mining of content consists of content analysis and structural clarification.
While content analysis is concerned with analysing texts within the XML document, structural clarification is concerned with determining similar documents based on their content (Nayak, 2005; Nayak et al., 2002). Discovering association rules means looking for those interesting relationships between elements appearing together in the XML document, which can be used to predict future behaviour of the document. To our knowledge, this chapter is the first work that aims to put together and study the existing techniques for mining association rules out of XML documents.
Background

The starting point in developing algorithms and methodologies for mining XML documents was, naturally, the existing work done in the relational database mining area (Agrawal, Imielinski, & Swami, 1993; Agrawal & Srikant, 1998; Ashrafi, Taniar, & Smith, 2005; Ashrafi, 2004; Daly & Taniar, 2004; Tjioe & Taniar, 2005). In their attempts to apply various relational mining algorithms to XML documents, researchers discovered that the approach could be a useful solution for mining small and not very complex XML documents, but not an efficient approach for mining large and complex documents with many levels of nesting. The XML format comes with an acclaimed extensibility that allows the structure to change, that is, nodes can be added, removed, and renamed in the document according to the information to be encoded. Furthermore, using the XML representation, there are many possibilities to express the same information (see Figure 1 for an example), not only between different XML documents but inside the same document as well (Rusu, Rahayu, & Taniar, 2005a). In a relational database, it is not efficient to have multiple tables representing the same data with different field names, types, and relationships, as the constraints and table structures are defined at design time.
Figure 1. Different formats to express the same information using the XML structure
By contrast, a new XML document can be added to a collection of existing XML documents even though it represents the same type of data using a totally different structure and element names, that is, a different XML schema. As a result, researchers concluded that the logic of the relational mining techniques could be maintained, but they needed to ensure that the steps of the existing algorithms accounted for the specific characteristics of XML documents. Among the XML mining methods, association rule discovery and the classification and clustering of XML documents have been the most studied, as they have a high degree of usability in common user tasks and in Web applications. In this chapter, we present a number of techniques for mining association rules out of XML documents. We chose to analyse this particular type of mining because (a) it is, in our opinion, the most useful for general types of applications in which the user simply wants to find interesting relationships in his or her data to support better business decisions, and (b) the techniques used are easy to understand, replicate, and apply for the common user who does not have the high degree of knowledge of mathematics or statistics often required by some techniques for performing classification or clustering.
Overview of the Generic Association Rule Concepts

The concept of association rules was first introduced by Agrawal et al. (1993) for relational-database data to determine interesting rules that could be extracted from data in a market basket analysis. The algorithm is known as the Apriori algorithm, and an example of an extracted association rule could be "If the user buys the product A, he or she will buy the product B as well, and this happens in more than 80% of transactions." The generic terms and concepts related to the Apriori algorithm are as follows. If I represents the set of distinct items that need to be mined, let D be the set of transactions, where each transaction T from D is a set of distinct items T ⊆ I. An association rule R is an implication X→Y, where X, Y ⊂ I and X ∩ Y = ∅. The rule R has support s in D if s% of the transactions in D contain both X and Y, and confidence c if c% of the transactions in D that contain X also contain Y. If we use a freq(X, D) function to calculate the percentage of transactions in D that contain X, the support and confidence for the association rule R can be written as: Support(X→Y) = freq(X ∪ Y, D) and Confidence(X→Y) = freq(X ∪ Y, D) / freq(X, D). The minimum support and minimum confidence are set at the beginning of the mining process, and it is compulsory that the determined rules observe them.
In the Apriori algorithm, all the large k-itemsets are determined, starting from k=1 (itemsets with only one item) and looping through D (the set of all transactions) to calculate their support and confidence. If an itemset does not meet the minimum required values, it is considered not large and is pruned. The algorithm relies on the fact that any itemset containing a subset that is not large cannot itself be large, which greatly speeds up the process. At the end, when all the large itemsets have been found, the association rules are determined from the set of large itemsets.
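A compact Python rendering of these steps (grow candidate itemsets level by level, prune by minimum support, then derive rules that meet the minimum confidence) might look as follows; it is a didactic sketch, not an optimized implementation.

from itertools import combinations

def apriori(transactions, min_sup, min_conf):
    """Didactic Apriori: transactions is a list of frozensets."""
    n = len(transactions)

    def freq(itemset):
        return sum(itemset <= t for t in transactions) / n

    singletons = {frozenset([i]) for t in transactions for i in t}
    large = [{s for s in singletons if freq(s) >= min_sup}]
    k = 1
    while large[-1]:
        # Extend each large k-itemset with one more large item...
        cands = {a | b for a in large[-1] for b in large[0]
                 if len(a | b) == k + 1}
        # ...and keep it only if every k-subset is large (Apriori property).
        cands = {c for c in cands
                 if all(frozenset(s) in large[-1]
                        for s in combinations(c, k))}
        large.append({c for c in cands if freq(c) >= min_sup})
        k += 1

    rules = []
    for level in large:
        for itemset in level:
            for r in range(1, len(itemset)):
                for body in map(frozenset, combinations(itemset, r)):
                    conf = freq(itemset) / freq(body)
                    if conf >= min_conf:
                        rules.append((set(body), set(itemset - body),
                                      freq(itemset), conf))
    return rules

D = [frozenset("AB"), frozenset("ABC"), frozenset("AC"), frozenset("BC")]
print(apriori(D, min_sup=0.5, min_conf=0.6))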
Overview of XML Association Rules

For XML documents, finding association rules means finding relationships between simple or complex elements in the document: in other words, finding relationships between substructures of the XML document. For example, in an XML document containing details of the staff members and students in a computer-science university department, including details of their research publications, an association rule could be "Those staff members who publish their papers with publisher X received an award, and this happens in 75% of cases." Later in the chapter (see the section on Apriori-based approaches), we give some examples of how the generic concepts of transaction and item are perceived in XML association rule mining. We will also show how the concepts of support and confidence are used by the presented approaches, as they need to be correct with regard to the total number of XML transactions to be mined. Our analysis is split in two subsections based on the type of XML documents mined, that is, (a) static XML documents and (b) dynamic XML documents. Static XML documents contain data gathered for a specific period of time that do not change their content
(for example, details about purchases in a store for March 2005 and June 2005 might come as two separate static XML documents if the business process stores the data at the end of each month). Dynamic XML documents contain data that are continuously changing in time (an online bookstore, for example, will change its content, represented as an XML document, from one day to another, or even multiple times during the same day, depending on the e-customers' behaviour). Most of the work done in the area of mining association rules from static XML documents uses classical algorithms based on the Apriori algorithm, described in the overview section above, while a number of non-Apriori-based approaches have been developed as well. In this chapter we will analyse at least one algorithm of each type. In the case of dynamic XML documents, the focus is on mining association rules out of historic versions of the documents or out of the effective set of changes extracted between two successive versions. The difference between two versions of the same XML document is named delta, and it can be (a) a structural delta, when the difference between versions is computed at the schema level, or (b) a content delta, when the difference is computed at the content level (Chen, Bhowmick, & Chia, 2004).
Discovering Association Rules from Static XML Documents

As specified in the background section, some of the XML association rule mining techniques use the generic Apriori algorithm (Agrawal et al., 1993; Agrawal & Srikant, 1998) as a starting point for developing new methodologies specific to the XML document format and extensibility, while completely different techniques have been developed as well. The following analysis is split in two subsections depending on the type of mining algorithm used, that is, (a) Apriori-based approaches and (b) non-Apriori-based approaches.
Apriori-Based Approaches

A first thing to do is to see how the generic concepts related to association rules (mentioned in the previous section), that is, transactions and items, are mapped to the particular XML format. Even though most of the papers detailed further in the chapter (Braga, Campi, & Ceri, 2003; Braga, Campi, Klemettinen, & Lanzi, 2002; Braga, Campi, Ceri et al., 2002; Wan & Dobbie, 2003, 2004) do not give precise definitions for these concepts, we can determine their view on the matter by analysing the algorithms. If an XML document is seen as a tree (see the example in Figure 3), the set of transactions D will be a list of complex nodes formed by querying the XML document for a specific path, a single complex node will form a transaction, and the children of the transaction node will be the items. The main difference from the generic concepts is that, while a generic transaction contains only a limited number of items and is easy to quantify, one XML tree transaction can have a varying number of items depending on the level of nesting of the document. A similar definition is given in Ding, Ricords, and Lumpkin (2003), but at a more general level; that is, all the nesting depths (paths) in an XML document are considered to be records starting with the root, so for any node in the document, each child is viewed as a record relative to the other records at the same depth or with similar tags.
A simple and direct method to mine association rules from an XML document by using XQuery (XQuery, 2005) was proposed by Wan and Dobbie (2003, 2004). Based on the fact that XQuery was introduced by the W3C (World Wide Web Consortium) to enable XML data extraction and manipulation, the algorithm is actually an implementation of the Apriori algorithm's phases using the XQuery language. In Figure 2, we exemplify the algorithm on an XML document containing information about items purchased in a number of transactions in a store (Figure 2a). The algorithm loops through the XML document, generates the large itemsets in the "large.xml" document (Figure 2b), and then builds the association rule document (Figure 2c). For details on the XQuery code implementation of the apriori function and the other functions involved, we refer the reader to the original papers (Wan & Dobbie, 2003, 2004). The significance of this approach is that the authors demonstrated for the first time that XML data can be mined directly, without the need to preprocess the document (for example, mapping it to another format, such as a relational table, which would be easier to mine). The algorithm works very well in the case of XML documents with a very simple structure (as in our example in Figure 2), but it is not very efficient for complex documents.
Figure 2. Example of a direct association-rule mining algorithm using XQuery (Wan & Dobbie, 2003, 2004)
Figure 3. Example of an XML document presented as a tree (research.xml) with the identified context, body, and head
Also, a major drawback, acknowledged by the authors, is that in the XQuery implementation, the first part of the algorithm, that is, discovering large itemsets (Figure 2b), is more expensive in terms of time and processor performance than in other language implementations (e.g., in C++). This drawback is explained by the lack of update operations in XQuery: a large number of loops through the document is required in order to calculate the large itemsets. However, the algorithm promises higher speed once update operations are implemented in XQuery. Other methodologies for discovering association rules from XML documents are proposed by Braga et al. (2003) and Braga, Campi, Ceri et al. (2002); they are also based on the Apriori algorithm as a starting point and mine the association rules in three major steps, that is, (a) preprocessing the data, (b) extracting the association rules, and (c) postprocessing the association rules. In our opinion, due to the specific XML format, where many levels of nesting can appear inside a document, simple loops and counts (as in Figure 2) are no longer possible, so the three-step approach seems more appropriate for mining various types of XML documents.
Preprocessing Phase

At this stage, a number of operations are performed to prepare the XML document for extracting association rules. In the following, we discuss some important terms and concepts appearing during this step, noting that this phase is the most extensive one, because a proper identification of all the aspects involved in preparing the mining will significantly reduce the amount of work during the other two phases (extracting and postprocessing rules). The concept of the context of the association rules refers to the part(s) of the XML document that will be mined (similar to the generic concept of a set of transactions). Sometimes we do not want to mine all of the information contained in an XML document, but only a part of it. For example, in an XML document containing university staff and student information (see Figure 3), we may want to find association rules among people appearing as coauthors. In this case, the identified context includes the multitude of nodes relating to publications, no matter whether they belong to PhD students or professors. This means the algorithm will not consider the remaining nodes, as they are not relevant to the rules to be discovered. Context selection refers to the user's opportunity to define constraints on the set of transactions D relevant to the mining problem (Braga, Campi, Ceri et al., 2002).
Referring again to our example (Figure 3), we may want to look for association rules considering all the authors in the document, but only for publications after the year 2000, so a constraint needs to be defined on the "year" attribute of each publication element (not visible in the graph, but existing in the original XML document). If we regard an association rule as an implication X→Y, X is the body of the rule and Y is the head. The body and head are always defined with respect to the context of the rule, as the support and the confidence will be calculated and relevant only with respect to the established context. In the XML association rule case, the body and the head will be, in fact, two different lists of nodes, that is, substructures of the context list of nodes; only nodes from these two lists will be considered to compose valid XML association rules. We exemplify the above described concepts, that is, context identification, context selection, and the head and body of the rules, by using the XMINE RULE operator (Braga et al., 2003) on the working example in Figure 3, that is, the "research.xml" document. We visually identify the mentioned concepts in Figure 4, which details the algorithm proposed by Braga et al. (2003) and Braga, Campi, Ceri et al. (2002).
The working document is defined in the first line; then the context, body, and head areas are defined together with the minimum support and minimum confidence required for the rules. The WHERE clause allows constraint specification; in this example, only publications after 2000 will be included in the context of the operator. The XMINE RULE operator brings some improvements over the direct, one-step association rule mining algorithm using XQuery described at the beginning of the section, as follows:
• The context, body, and head of the operator can be as wide as necessary, by specifying multiple areas of interest for them as parts of the XML document or even from different XML documents.
• When specifying the context, body, and head segments, a variable can be added to take some specific values that enhance the context selection facility.
• A GROUP clause can be added to allow the restructuring of the source data.
Figure 4. Mining association rules from an XML document using the XMINE RULE syntax
Figure 5. The syntax of the XMINE RULE operator introduced by Braga et al. (2003)
See Figure 5 for a visual representation of the new body and head selections. The main difference from the one-step mining approach (Wan & Dobbie, 2003, 2004) is that the three-step algorithm (Braga et al., 2003; Braga, Campi, Ceri et al., 2002; Braga, Campi, Klemettinen et al., 2002) does not work directly on the XML document all the way down to the phase of extracting the association rules; instead, the first phase, that is, preprocessing, has as its final output a relational binary table (R). The table is built as follows (the authors suggest the use of Xalan (2005) as an XPath interpreter in the actual implementation): (a) the fragments of the XML document specified in the context, body, and head are extracted and filtered by applying the constraints in the WHERE clause (in case one exists); (b) the XML fragments obtained by filtering the body and head become columns in the relational table R; (c) the XML fragments obtained by filtering the context become rows in the table R; and (d) by applying a contains function (which, for a given XML fragment x and an XML fragment y, returns 1 if x contains y, and 0 otherwise), the binary relational table R is obtained, which will be used during the rule-extraction step to determine binary association rules applicable to the XML document.
The selection done during the preprocessing phase, by specifying the context, the body, and the head of the association rules, is considered by some researchers as not generic enough (Ding et al., 2003), because it limits from the outset the possibility of finding and extracting other rules (involving other parts of the documents).
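The mechanics of building R can be illustrated with a few lines of Python. Here XML fragments are modelled simply as sets of (path, value) pairs for brevity, whereas the papers filter real XML fragments with XPath; the data values are invented for the example.

def contains(x, y):
    """1 if fragment x contains fragment y, 0 otherwise."""
    return int(y <= x)

# One row per context fragment (e.g., per publication element)...
contexts = [
    {("author", "A"), ("author", "H"), ("publisher", "X")},
    {("author", "A"), ("publisher", "Y")},
]
# ...and one column per body/head fragment.
columns = [{("author", "A")}, {("author", "H")}, {("publisher", "X")}]

R = [[contains(ctx, col) for col in columns] for ctx in contexts]
print(R)  # [[1, 1, 1], [1, 0, 0]]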
Extracting Association Rules

For the one-step mining approach, Figure 2 exemplifies the XQuery implementation of the generic Apriori algorithm. It mainly performs the following steps. Starting from the 1-itemsets (i.e., itemsets with one single item), a k-itemset (k>1) is built by extending a (k-1)-itemset with a new item. For each itemset, the support is calculated as the percentage of the total number of transactions that contain all the items of the itemset. If the itemset is not frequent (large) enough (i.e., its support is less than the minimum support required), it is removed (pruned), and the algorithm continues with the next itemset until all the large itemsets are determined. Before an itemset's support is calculated to decide on pruning or keeping it, the itemset is considered a candidate itemset (i.e., possibly large) only if all its subsets are large (i.e., observe the minimum support required). The association rules are determined from the large itemsets extracted, and for each of them a confidence is calculated as follows: for a rule X→Y, its confidence is equal to the percentage of transactions containing X that also contain Y.
In the three-step approaches presented in the previous subsection, after obtaining the binary table R in the preprocessing phase, any relational association rule algorithm can be applied (e.g., the generic Apriori) to find the relationships between the binary values in the table, which represent the presence of an XML fragment inside another XML fragment. The steps of the generic Apriori algorithm were detailed in the previous paragraph. In the particular case of the binary matrix R, the rows of the matrix are the transactions to be mined by the algorithm. The binary knowledge extracted at this step signifies the simultaneous presence of fragments from the body or head in the selected context.
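Reading the rows of R as transactions over the column labels is a one-liner, after which a generic miner such as the Apriori sketch shown earlier can be applied unchanged; the labels below are invented for the example.

labels = ["author:A", "author:H", "publisher:X"]
R = [[1, 1, 1], [1, 0, 0]]
# Each row becomes the set of column labels with value 1.
transactions = [frozenset(l for l, bit in zip(labels, row) if bit)
                for row in R]
print(transactions)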
Postprocessing Phase

After the extraction of the binary association rules from the relational table during the second step, they are transformed back into XML-specific representations of the discovered rules. We recall from the preprocessing step that the filtered XML fragments obtained by applying the body and head path queries on the XML document became columns in the table, while the filtered XML fragments obtained by applying the context path queries became rows. Reversing the process, together with the newly determined knowledge, that is, the association rules between the binary values, we get an XML structure in which each rule element has two attributes, support and confidence, and two child elements in which the fragments of the body and head participating in the rule are listed. An example of the result of applying the XMINE algorithm is presented in Figure 6, in which the following rules are given: "Author A → Author H has 85% support and 20% confidence" and "Author H and Author B → Author A has 70% support and 22% confidence."
Figure 6. Example of XML association rules obtained by applying the XMINE RULE algorithm

Non-Apriori-Based Approach

In this section, we present one framework for discovering association rules that is different from the earlier described approaches, which were based on the Apriori algorithm sequence. The main feature is that this framework (Feng, Dillon, Wiegand, & Chang, 2003) considers in more detail the specific format of the XML documents, that is, their possible representation as trees. We recall that at the beginning of the section on Apriori-based approaches, we proposed a translation of the terms transaction and item into concepts more specific to XML association rule mining. The non-Apriori-based framework discussed in the current section proposes a different mapping of the above terms to tree-structured XML documents. The work of Feng et al. (2003) aims to discover association rules from a collection of XML documents rather than from a single document; hence, each XML document or tree corresponds to a database record (transaction), while each XML fragment (subtree) corresponds to an item in the transaction. In this context, the proposed framework intends to discover association rules among trees in XML documents rather than among simple-structured items. Each tree is named a tree-structured item and is a rooted, ordered tree having its nodes classified into (a) basic nodes, with no edges emanating from them, and (b) complex nodes, which are internal nodes with one or more edges emanating from them. In Figure 7 we present some of the concepts introduced to define the framework for mining XML association rules.
Figure 7. Example of two tree-structured items in the framework for mining XML association rules as proposed by Feng et al. (2003)
In Figure 7, there are two tree-structured items extracted from the order.xml example document (Feng et al., 2003), in which the nodes n1,1, n2,1, n2,2, and n2,3 are complex, while n1,2, n1,3, n1,4, n2,4, n2,5, and n2,6 are basic. The edges inside the trees are labeled depending on the type of relationship between the nodes. There are two types of labels attached to edges: ad (ancestor-descendant) and ea (element-attribute). In Figure 7, the edge that connects the PERSON node with the Profession node is labeled ea, because Profession is an attribute of PERSON in the XML document. All the other edges are labeled ad, as they represent connections between a parent node and a child node. There are three types of constraints that can be imposed on nodes and edges, as follows.

1. Level constraints: If e is an ad relationship n_source → n_target, Level(e) = m (m an integer) means that n_target is the mth descendant of n_source.
2. Adhesion constraints: If e is an ea relationship n_source → n_target, Adhesion(e) = strong means that n_target is a compulsory attribute of n_source, while Adhesion(e) = weak means that n_target is an optional attribute of n_source.
3. Position constraints: These refer to the actual contextual position of the node among all the
nodes sharing the same parent. For example, in Figure 7, Posi(n2,4) = last() means the Title node with the Star War Game content is the title of the last ordered CD. In this framework, a well-formed tree is a tree that observes three conditions: (a) it has a unique root node; (b) for any chosen edge in the tree, if it is labelled ad, it links a complex node with a basic node, while if it is labeled ea, the source node needs to be a complex node; and (c) all the constraints are correctly applied, that is, a level constraint can be applied only on an ad edge, while an adhesion constraint can be applied only on an ea edge. Using the above described concepts, the subtree concept (subitem) is defined based on the definition of the subtree relationship. A tree T with root r is a subtree of the tree T' with root r' (noted T ≤tree T') if and only if there is a node n' in T' such that r is a part of n' (noted r ≤node n'). We refer the reader to the original paper (Feng et al., 2003) for more details and explanations of these concepts. Finally, the association rule is defined as an implication X→Y that satisfies two conditions:

1. X ⊂ T, Y ⊂ T and X ∩ Y = ∅, where T is the set of tree-structured items, and
2. For any Tm and Tn ∈ (X ∪ Y), there is no tree Tp that can satisfy the conditions Tp ≤tree Tm and Tp ≤tree Tn.

An example of an association rule in terms of tree-structured items (named XML-enabled association rules by the authors) is presented in Figure 8. The rule exemplified in Figure 8 tells us that if a male person orders a CD with the title Star War Game, he will also order two books, that is, Star War I and Star War II, in this order. Although an algorithm implementing the above described framework is still under development, the obtained association rules are powerful, as they address the specific format of the XML documents; the associated items are hierarchical structures, not simple nodes. Furthermore, they carry the notion of order, as exemplified by the rule in Figure 8.
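A bare-bones Python rendering of tree-structured items and an occurrence test is shown below. The level, adhesion, and position constraints are omitted for brevity, and the matching is a naive simplification of the formal ≤tree relation; the node labels are borrowed from the example above.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    label: str
    # (edge label, child); edge labels are "ad" or "ea".
    edges: List[Tuple[str, "Node"]] = field(default_factory=list)

    @property
    def is_basic(self) -> bool:
        return not self.edges  # no edges emanating from the node

def occurs_in(t: Node, t2: Node) -> bool:
    """Naive check that tree t occurs somewhere inside tree t2."""
    def matches(a: Node, b: Node) -> bool:
        if a.label != b.label:
            return False
        return all(any(lbl == lbl2 and matches(c, c2)
                       for lbl2, c2 in b.edges)
                   for lbl, c in a.edges)
    return matches(t, t2) or any(occurs_in(t, child)
                                 for _, child in t2.edges)

person = Node("PERSON", [("ea", Node("Profession")), ("ad", Node("Name"))])
query = Node("PERSON", [("ad", Node("Name"))])
print(occurs_in(query, person))  # True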
Summary of Association Rule Mining Techniques for Static XML Documents

To conclude this section, we make some comments on the major differences between the above discussed XML association rule techniques and their possible degree of generalization, considering both the number of XML documents mined at once and the structure of these documents, together with some experimental results of the authors. The main difference between the Apriori-based approaches and the non-Apriori-based framework
presented in this chapter consists in the way they define the items considered by their mining algorithms. While the former extract the items to be mined as a list of nodes by querying the XML document for a specific path, for the latter, each subtree (substructure) in the XML document's tree representation is an item, and the framework actually seeks to discover association rules between the substructures of the document. Another significant difference resides in the number of XML documents handled by the algorithms and the degree of complexity of the documents (levels of nesting). Sometimes we may want to find association rules from a single XML document (e.g., books in a library) or from two or more XML documents (e.g., one document containing books in a library, one containing personal details of the authors, and a third containing sales of the books for a period of time). If we have a collection of XML documents, it is probable that we will get more interesting information by analysing all the documents together instead of one at a time. The simple (one-step) XML association rule mining techniques (Wan & Dobbie, 2003, 2004) consider one single document with a simple structure (see Figure 2a), for example, an XML document containing transactions in a superstore with the corresponding purchased items. The authors state that their proposed algorithm "works with any XML document, as long as the structure of it is known in advance" (p. 94), but they consider that applying their algorithm to an XML document with a more complex structure is still an open issue from the performance point of view.
Figure 8. An example of the XML-enabled association rule (Feng et al., 2003)
The three-step approaches (Braga et al., 2003; Braga, Campi, Ceri et al., 2002; Braga, Campi, Klemettinen et al., 2002) are designed to work with more complex-structured XML documents (see the example in Figure 3, with five levels of nesting). Still, the structure of the document needs to be known in advance, as the context, body, and head of the association rules should be defined at the beginning of the algorithm. The authors acknowledge that, even though the experiments were done without considering efficiency as a main concern, the results proved excellent performance when using Xalan (2005). Also, the experimental results showed that only a small percentage of the time was spent on preprocessing and postprocessing the XML document, while the actual mining was the slowest phase. The authors reckon that any future step in the development of XQuery to allow more complex conditions in filtering XML documents will substantially improve the mining step's efficiency and speed.
Discovering Association Rules from Dynamic XML Documents

As specified in the background section, this section details some of the work done on dynamic XML document versioning and mining. A dynamic XML document is one that is continually changing its content and/or structure in time, depending on the data required to be stored at a certain moment. An example could be the content of an online bookstore, where any change in the number of existing books, their prices, and/or their availability will affect the content of the XML document that stores this information. A user (e.g., the online store manager) might decide to store each new version of the XML document resulting after each change, so that he or she would be able to refer to the history of the store's content at any time in the future for business purposes.
time in the future for business purposes. In this case, a high degree of redundancy might appear, and the user will end up with a large collection of XML documents in which a large amount of information is repeated. The issue for researchers was how to efficiently store all these versions so the user will be able to get a historic version of the document with as little redundancy of information as possible. Moreover, a new question was raised about what kind of knowledge can be discovered from the multiple versions of an XML document; the goal in the case of mining dynamic XML documents would be to find a different type of knowledge than can be obtained from snapshots of data. For example, some parts of the XML document representing the online store could change more often, and some other parts could change together; for instance, deletions could appear more often than updates, and so on. All this information could be used by the end user in making business decisions related to the online store’s content. In this section, we will first refer to the work done on versioning XML documents, that is, methodologies that efficiently store the changing XML documents in a way that allows fast retrieval of the historic versions. These include our own proposed solution to the issue of versioning dynamic XML documents, which collects all the changes between versions in a single XML document, named the consolidated delta. Finally, we will describe our proposed solution for mining association rules from the changes undergone by dynamic XML documents. Most of the methodologies addressing the issue of versioning XML documents are based on the concept of the delta document (Cobena, Abiteboul, & Marian, 2005; Marian, Abiteboul, Cobena, & Mignet, 2001). This is calculated and built by comparing two consecutive versions of the XML document and recording the changes that have taken place. XML versioning techniques address two main issues (Zhao, 2004), as follows:
1. The querying time can be improved by limiting the amount of data that needs to be queried if the result of the same query in the previous state of the document is already known.
2. Storing historical structural deltas (the actual changes) of the XML documents can help to find knowledge (e.g., association rules) not just for snapshot data (as in mining static XML documents), but also considering their evolution in time.

For a better understanding of the differences between the XML versioning techniques, we will exemplify them on two versions of an XML document (catalog.xml), which contains data about some products in an online store (Figure 9). A change-centric management of versions in an XML warehouse was first introduced by Marian et al. (2001). They consider a sequence of snapshots of XML documents and, for each pair of consecutive versions, the algorithm calculates a delta document as the difference between them. Delta ∆i is a sequence of update, delete, and insert
operations capable of transforming the initial version of the document (Di) into the final version (Di+1). Furthermore, based on the observation that the delta ∆i is not enough to transform Di+1 back into Di, the authors introduce the notion of completed delta. This is a delta that contains more information and works both forward and backward, being able to produce Di or Di+1 when the other version is available. In our working example (Figure 9), the forward, backward, and completed deltas are shown in Figure 10. In the example in Figure 10, T1 is the tree rooted at node 11, while T2 is the tree rooted at node 3. These two trees will be included in the completed delta XML document. In the delete and insert sequences, the first parameters are the parent nodes, the second parameters are the affected node positions, and the third parameters are the trees rooted at the affected nodes. In the update sequences, the first parameters are the affected nodes, the second ones are the new values, while the third parameters are the old values.
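The idea of a completed delta can be illustrated with a small sketch. The tuple encoding of operations below is our own assumption for illustration, not the format used by Marian et al.; it only shows why keeping the old values alongside the new ones makes a delta invertible in both directions.

def invert(op):
    kind = op[0]
    if kind == "update":          # ("update", node, new_value, old_value)
        _, node, new, old = op
        return ("update", node, old, new)
    if kind == "insert":          # ("insert", parent, position, subtree)
        return ("delete",) + op[1:]
    if kind == "delete":          # ("delete", parent, position, subtree)
        return ("insert",) + op[1:]
    raise ValueError(kind)

def backward_delta(completed_delta):
    # Undoing a sequence of operations means inverting each one in reverse order.
    return [invert(op) for op in reversed(completed_delta)]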
Figure 9. Two consecutive versions of the same XML document, catalog.xml, with corresponding IDs, in both XML document format and trees
Figure 10. Examples of forward, backward, and completed deltas
In this approach (Marian et al., 2001), a prospective XML warehouse will need to store the initial version of an XML document together with all the completed deltas calculated over time, so the model will be able to successfully answer different versioning requests. At the same time, the authors acknowledge that one of the most important issues in their approach is the storage of redundant information (e.g., both the old and the new versions of elements consecutively updated will be stored in the completed deltas). Another change detection algorithm, X-Diff, was proposed by Wang, DeWitt, and Cai (2003), focusing on unordered XML document trees (where the left-to-right order among siblings is not important). They argue that an unordered tree model is more appropriate for most applications than the ordered model (where both the ancestor-descendant and the left-to-right order among siblings are important) and propose a methodology that detects changes in XML documents by integrating specific XML structure characteristics with standard tree-to-tree correction techniques. We do not detail the X-Diff algorithm here; mainly, it performs the following steps to determine the minimum-cost edit sequence that is able to transform document D1 into document D2. (a) It parses the D1 and D2 documents and builds the associated T1 and T2 trees while, at the same time, it computes an XHash value for every node, used to represent the entire subtree rooted at the node. (b) It compares the XHash values of the roots and decides if the trees are equivalent (when the XHash values are equal); otherwise, it calculates min(T1, T2) as a minimum-cost matching between the trees. (c) It determines the generated minimum-cost edit script E based on the min(T1, T2) found at step (b).
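Step (a) can be illustrated with a brief sketch of an order-independent subtree hash. The hashing scheme below is an assumed stand-in for X-Diff's actual XHash function; it only demonstrates why equal hashes identify equivalent unordered subtrees.

import hashlib

def xhash(node):
    # node = (label, value, [children]); sorting child hashes makes the
    # digest independent of sibling order, as in the unordered tree model.
    label, value, children = node
    child_hashes = sorted(xhash(c) for c in children)
    data = "|".join([label, str(value)] + child_hashes)
    return hashlib.sha1(data.encode()).hexdigest()

# Two trees that differ only in sibling order get the same XHash:
t1 = ("product", None, [("price", 160, []), ("status", "Available", [])])
t2 = ("product", None, [("status", "Available", []), ("price", 160, [])])
assert xhash(t1) == xhash(t2)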
For our working example, the minimum edit script generated by X-Diff would be: E = {delete(3), update(5, “Available”), update(7, 300), insert(4, (Price, 160))}. As can be noticed, the insert operation does not include the position of the newly inserted node because the X-Diff technique is focused on unordered XML trees, and the position of the node is not considered important for the algorithm. A novel way of storing changes in time with less overhead was proposed by Rusu, Rahayu, and Taniar (2005a). In this approach, earlier versions of the documents can be easily queried and the degree of redundancy is very small. Our algorithm replaces the approach of storing the differences between two versions of an XML document in deltas and keeping all the deltas in the warehouse with the new concept of a consolidated delta, in which changes between versions are recorded in a single XML document that is modified any time a new version appears. The main idea is to build a single (consolidated) XML delta document containing all the changes undergone by the versioned XML document in the T1–Tn period of time by introducing a new temporal element to store the changes at each time stamp for each altered element. Each such temporal element has two attributes: time, to store the time stamp, and delta, to store the type of change (delta can take one of the values inserted, modified, or deleted). To exemplify the consolidated delta approach, Figure 11 shows another set of changes that have been applied to the document in Figure 9. The changes between Version 1 and Version 2 are recorded in the first consolidated delta (left), which is built starting from the initial version (Version 1), adding the temporal elements as explained before.
Similarly, after another set of changes happens at time T3 (Version 3), new temporal elements are added and the consolidated delta is updated to reflect them (right). Every time the consolidated delta is modified to reflect new changes, there are rules to be observed in order to increase the efficiency of the algorithm and eliminate redundancy as much as possible; we list them here, as follows. (a) If all the children are unchanged, the parent is unchanged. If a parent is unchanged at time Ti, its children are not marked (stamped) for that particular time stamp; they will be easily rebuilt from the existing previous versions of their parents. (b) If any of the children are modified, deleted, or inserted, the parent is modified. If a parent is modified at time Ti, all its children will be stamped, each with its own status, that is, modified, inserted, deleted, or unchanged. (c)
If a parent is deleted at time Ti, all its children will be deleted, so they will not appear in the consolidated delta for that particular time stamp or for any time stamp after that. To speed up building the consolidated delta, we assign unique identifiers to the elements in the initial XML document and store the maximum ID value. When new elements are inserted in a following version, they will receive IDs based on the existing maximum ID, so at any time one element will be uniquely identified and we will be able to track its changes. The two big advantages of the consolidated approach are the following: (a) there is a very small degree of redundancy in the stored data, as unchanged data between versions will not be repeated, and (b) it is enough for the user to store the calculated consolidated delta to be able
Figure 11. Example of the consolidated delta after two series of changes applied to the initial XML document catalog.xml
to get an earlier version of the document at any time. We have tested the algorithm for building the consolidated delta and it shows excellent results for XML documents of various sizes.
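The stamping rules (a)–(c) above can be sketched as a bottom-up pass over the document tree. The dictionary-based tree encoding and the field names (change, children, stamps) below are illustrative assumptions, not the actual implementation.

def status_of(node):
    # A node's own change plus its children determine its delta status.
    if node["change"] in ("deleted", "inserted"):
        return node["change"]
    children = node.get("children", [])
    if node["change"] == "modified" or any(
            status_of(c) != "unchanged" for c in children):
        return "modified"       # rule (b): any changed child modifies the parent
    return "unchanged"          # rule (a)

def stamp(node, t):
    s = status_of(node)
    node.setdefault("stamps", []).append((t, s))
    if s == "deleted":
        node["children"] = []   # rule (c): deleted subtrees disappear for good
    elif s == "modified":
        for c in node.get("children", []):
            stamp(c, t)         # rule (b): children are stamped with their status
    # rule (a): children of an unchanged node get no stamp for this time stamp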
Versioning Dynamic XML Documents Using the Consolidated Delta Approach

The consolidated delta is a very efficient tool when the user wants to retrieve an old version of the document. Suppose the latest version of the document is at the moment Tn in time (see Figure 12), and the user wants to determine the effective look (structure and content) of the XML document at a moment Ti, where i < n. In the context of mining dynamic XML documents, two types of knowledge can be sought:

a. Interesting knowledge (in our case, association rules) that can be found in the collection of historic versions of the document(s)
b. Association rules extracted from the actual changes between versions, that is, from the differences recorded in delta documents

There was some work done to discover frequently changing structures in versions of XML documents (Chen et al., 2004; Zhao, Bhowmick, Mohania, & Kambayashi, 2004; Zhao, Bhowmick, & Mandria, 2004), applicable more to discovering the first type of knowledge (Case a above). We do not detail them here; instead, we propose a novel method of mining changes extracted from dynamic XML documents (applicable to the second type of knowledge, Case b above) by using the consolidated delta described in the previous subsection. Mainly, mining is done by extracting the set of changes for each time stamp Ti (2 ≤ i ≤ n) from the consolidated delta; a sketch is given below.
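A minimal sketch of how such change mining could proceed, under our assumptions about the stamped-element representation: each time stamp yields one transaction of (element, delta) items, over which standard support counting can be run to find parts of the document that change together.

from collections import defaultdict

def change_transactions(stamped_elements):
    # stamped_elements: iterable of (element_id, [(t, delta), ...]) pairs
    by_time = defaultdict(set)
    for elem, stamps in stamped_elements:
        for t, delta in stamps:
            if delta != "unchanged":
                by_time[t].add((elem, delta))
    return list(by_time.values())          # one itemset per time stamp

def support(itemset, transactions):
    if not transactions:
        return 0.0
    return sum(1 for t in transactions if itemset <= t) / len(transactions)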
Approaches for mining association rules from static XML documents need to know from the beginning which specific areas they need to look at to find either the antecedent or the consequent of the association rule. In this context, future work is needed to improve the existing methodologies in terms of generalization (Buchner, Baumgarten, Mulvenna, Bohm, & Anand, 2000; Garofalakis, Rastogi, Seshadri, & Shim, 1999). Finding algorithms with a high degree of generalization is imperative, as scalability is a priority for current and future XML-driven applications. Mining association rules from dynamic XML documents (i.e., documents that change their content over time): dynamic mining is still a very young area, and much research remains to be undertaken. From our perspective, intense activity in this field will be noticed soon, as Web applications are used on a large scale and manipulate dynamic data. Besides association rules, researchers are looking to find other types of patterns in dynamic XML documents, that is, structural changes from one XML document version to another, and content changes. Our next research work is to implement and evaluate a mining algorithm able to discover association rules and other types of knowledge from the sequence of actual changes of dynamic XML documents. The outcome of this work will be very useful in finding not only what the patterns are in the changing documents, but also how they relate to one another and how they could affect the future behaviour of the initial XML document.
Conclusion

This chapter is a systematic analysis of some of the existing techniques for mining association rules
out of XML documents in the context of rapid changes and discoveries in the Web knowledge area. The XML format is increasingly used to store data that traditionally existed in relational-database format, and also to exchange such data between various applications over the Internet. In this context, we presented the latest discoveries in the area of mining association rules from XML documents, both static and dynamic, in a well-structured manner, with examples and explanations, so the reader will be able to easily identify the appropriate technique for his or her needs and replicate the algorithm in a development environment. At the same time, we have included in this chapter only research work with a high level of usability, in which concepts and models are easy to apply in real situations without requiring knowledge of high-level mathematical concepts. The overall conclusion is that this chapter is a well-structured tool, very useful for understanding the concepts behind discovering association rules out of collections of XML documents. It is addressed not only to students and other academics studying the mining area, but also to end users, as a guide for creating powerful XML mining applications.
References

Agrawal, R., Imielinski, T., & Swami, A. N. (1993). Mining association rules between sets of items in large databases. Proceedings of the ACM International Conference on Management of Data (SIGMOD 1993) (pp. 207-216). Agrawal, R., & Srikant, R. (1998). Fast algorithms for mining association rules. In Readings in database systems (3rd ed., pp. 580-592). San Francisco: Morgan Kaufmann Publishers Inc.
Ashrafi, M. Z., Taniar, D., & Smith, K. (2005). An efficient compression technique for frequent itemset generation in association rule mining. In Proceedings of International Conference in Advances in Knowledge Discovery and Data Mining (PAKDD 2005) (LNCS 3518, pp. 125-135). Heidelberg, Germany: Springer-Verlag. Ashrafi, M. Z., Taniar, D., & Smith, K. A. (2004a). A new approach of eliminating redundant association rules. In Database and expert systems applications (LNCS 3180, pp. 465-474). Heidelberg, Germany: Springer-Verlag. Ashrafi, M. Z., Taniar, D. & Smith, K. A. (2004b). ODAM: An optimized distributed association rule mining algorithm. IEEE Distributed Systems Online, 5(3). Braga, D., Campi, A., & Ceri, S. (2003). Discovering interesting information in XML with association rules. Proceedings of 2003 ACM Symposium on Applied Computing (SAC’03) (pp. 450-454). Braga, D., Campi, A., Ceri, S., Klemettinen, M., & Lanzi, P. L. (2002). A tool for extracting XML association rules. Proceedings of the 14th International Conference on Tools with Artificial Intelligence (ICTAI ’02) (p. 57). Braga, D., Campi, A., Klemettinen, M., & Lanzi, P. L. (2002). Mining association rules from XML data. In Proceedings of International Conference on Data Warehousing and Knowledge Discovery (DaWak 2002) (LNCS 2454, pp. 21-30). Heidelberg, Germany: Springer-Verlag. Buchner, A. G., Baumgarten, M., Mulvenna, M. D., Bohm, R., & Anand, S. S. (2000). Data mining and XML: Current and future issues. Proceedings of 1st International Conference on Web Information System Engineering (WISE 2000) (pp. 127-131).
Chen, L., Bhowmick, S. S., & Chia, L. T. (2004). Mining association rules from structural deltas of historical XML documents. In Proceedings of International Conference in Advances in Knowledge Discovery and Data Mining (PAKDD 2004) (LNCS 3056, pp. 452-457). Heidelberg, Germany: Springer-Verlag. Cobena, G., Abiteboul, S., & Marian, A. (2005). XyDiff tools: Detecting changes in XML documents. Retrieved February 2006, from http://www.rocq.inria.fr/gemo Daly, O., & Taniar, D. (2004). Exception rules mining based on negative association rules. In Computational science and its applications (LNCS 3046, pp. 543-552). Heidelberg, Germany: Springer-Verlag. Ding, O., Ricords, K., & Lumpkin, J. (2003). Deriving general association rules from XML data. Proceedings of the ACIS 4th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD’03) (pp. 348-352). Feng, L., Dillon, T., Weigand, H., & Chang, E. (2003). An XML-enabled association rules framework. In Proceedings of International Conference on Database and Expert Systems Applications (DEXA 2003) (LNCS 2736, pp. 88-97). Heidelberg, Germany: Springer-Verlag. Garofalakis, M. N., Rastogi, R., Seshadri, S., & Shim, K. (1999). Data mining and the Web: Past, present and future. Proceedings of the 2nd Workshop on Web Information and Data Management (WIDM 1999) (pp. 43-47). Marian, A., Abiteboul, S., Cobena, G., & Mignet, L. (2001). Change-centric management of versions in an XML warehouse. In Proceedings of the 27th International Conference on Very Large Data Bases (VLDB 2001) (pp. 581-590). Mignet, L., Barbosa, D., & Veltri, P. (2003). The XML Web: A first study. Proceedings of the International WWW Conference (pp. 500-510).
Nayak, R. (2005). Discovering knowledge from XML documents. In J. Wang (Ed.), Encyclopedia of data warehousing and mining (pp. 372-376). Hershey, PA: Idea Group Reference. Nayak, R., Witt, R., & Tonev, A. (2002). Data mining and XML documents. Proceedings of the 2002 International Conference on Internet Computing (pp. 660-666). Rusu, L. I., Rahayu, W., & Taniar, D. (2005a). Maintaining versions of dynamic XML documents. In Proceedings of the 6th International Conference on Web Information System Engineering (WISE 2005) (LNCS 3806, pp. 536-543). Heidelberg, Germany: Springer-Verlag. Rusu, L. I., Rahayu, W., & Taniar, D. (2005b). A methodology for building XML data warehouses. International Journal of Data Warehousing and Mining, 1(2), 67–92. Tjioe, H. C., & Taniar, D. (2005). Mining association rules in data warehouses. International Journal of Data Warehousing and Mining, 1(3), 28–62. Wan, J. W., & Dobbie, G. (2003). Extracting association rules from XML documents using XQuery. Proceedings of the 5th ACM International Workshop on Web Information and Data Management (WIDM’03) (pp. 94-97). Wan, J. W., & Dobbie, G. (2004). Mining association rules from XML data using XQuery. Proceedings of International Conference on Research and Practice in Information Technology (CRPIT 2004) (pp. 169-174). Wang, Y., DeWitt, D. J., & Cai, J. Y. (2003). X-Diff: An effective change detection algorithm for XML documents. Proceedings of the 19th International Conference on Data Engineering (ICDE 2003) (pp. 519-530). World Wide Web Consortium (W3C). (n.d.). Retrieved February 2006, from http://www.w3c.org
Xalan. (2005). The Apache Software Foundation: Apache XML project. Retrieved December 2005, from http://xml.apache.org/xalan-j/ XQuery. (2005). Retrieved February 2006, from http://www.w3.org/TR/2005/WD-xquery-20050915/ Zhao, Q., Bhowmick, S. S., & Mandria, S. (2004). Discovering pattern-based dynamic structures from versions of unordered XML documents. In Proceedings of International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2004) (LNCS 3181, pp. 77-86). Heidelberg, Germany: Springer-Verlag.
Zhao, Q., Bhowmick, S. S., Mohania, M., & Kambayashi, Y. (2004). Discovering frequently changing structures from historical structural deltas of unordered XML. Proceedings of ACM International Conference on Information and Knowledge Management (CIKM’04) (pp. 188-197).
This work was previously published in Services and Business Computing Solutions with XML: Applications for Quality Management and Best Processes, edited by Patrick Hung, pp. 176-196, copyright 2009 by Business Science Reference (an imprint of IGI Global).
Section IV
Utilization and Application
This section introduces and discusses the utilization and application of enterprise information systems around the world. These particular selections highlight, among other topics, enterprise information systems in multiple countries, data mining applications, and critical success factors of enterprise information systems implementation. Contributions included in this section provide excellent coverage of the impact of enterprise information systems on the fabric of our present-day global village.
Chapter 4.1
QoS-Oriented Grid-Enabled Data Warehouses Rogério Luís de Carvalho Costa University of Coimbra, Portugal Pedro Furtado University of Coimbra, Portugal
DOI: 10.4018/978-1-60566-756-0.ch009

ABSTRACT

Globally accessible data warehouses are useful in many commercial and scientific organizations. For instance, research centers can be put together through a grid infrastructure in order to form a large virtual organization with a huge virtual data warehouse, which should be transparently and efficiently queried by grid participants. As is frequent in the grid environment, in the Grid-based Data Warehouse one can both have resource constraints and establish Service Level Objectives (SLOs), providing some Quality of Service (QoS) differentiation for each group of users, participant organizations or requested operations. In this work, we discuss query scheduling and data placement in the grid-based data warehouse, proposing the use of QoS-aware strategies. There are some works on parallel and distributed data warehouses, but most
do not concern the grid environment, and those which do use best-effort-oriented strategies. Our experimental results show the importance and effectiveness of the proposed strategies.
INTRODUCTION

In the last few years, Grid technology has become a key component in many widely distributed applications from distinct domains, which include both research-oriented and business-related projects. The Grid is used as an underlying infrastructure that provides transparent access to shared and distributed resources, like supercomputers, workstation clusters, storage systems and networks (Foster, 2001). In Data Grids, the infrastructure is used to coordinate the storage of huge volumes of data or the distributed execution of jobs which consume or generate large volumes of data (Krauter et al,
2002; Venugopal et al, 2006). Most of the works on data grids consider the use or management of large files, but grid-enabled Database Management Systems (DBMSs) may be highly useful in several applications from distinct domains (Nieto-Santisteban et al, 2005; Watson, 2001). On the other hand, data warehouses are mostly read-only databases which store historical data that is commonly used for decision support and knowledge discovery (Chaudhuri & Dayal, 1997). Grid-based data warehouses are useful in many real and virtual global organizations which are generating huge volumes of distributed data. In such a context, the data warehouse is a highly distributed database whose data may be loaded from distinct sites and which should be transparently queried by users from distinct domains. But constructing effective grid-based applications is not simple. Grids are usually very heterogeneous environments composed of resources that may belong to distinct organization domains. Each domain administrator may have a certain degree of autonomy and impose local resource usage constraints on remote users (Foster, 2001). Such site autonomy is reflected in terms of scheduling algorithms and scheduler architectures. The hierarchical architecture is one of the most commonly used scheduling architectures in Grids (Krauter et al, 2002). In such an architecture, a Community Scheduler (or Resource Broker) is responsible for transforming submitted jobs into tasks and assigning them to sites for execution. At each site, a Local Scheduler is used to manage local queues and implement local domain scheduling policies. Such an architecture enables a certain degree of site autonomy. Besides that, in Grids, tasks are usually specified together with Service Level Objectives (SLOs) or Quality-of-Service (QoS) requirements. In fact, in many Grid systems, scheduling is QoS-oriented instead of performance-oriented (Roy & Sander, 2004). In such situations, the main objective is to increase users’ satisfaction instead of achieving high performance. Hence, the user-specified
SLOs may be used by the Community Scheduler to negotiate with Local Schedulers the establishment of Service Level Agreements (SLAs). But SLOs can also be used to provide some kind of differentiation among users or jobs. Execution deadlines and execution cost limits are examples of commonly used SLOs. We consider here the use of deadline-marked queries in grid-based Data Warehouses. In such a context, execution time objectives can provide some differentiation between interactive queries and report queries. For example, one can establish that interactive queries should be executed within a 20-second deadline and that report queries should be executed within 5 minutes. In fact, different deadlines may be specified considering several alternatives, like the creation of privileged groups of users that should obtain responses in lower times, or providing smaller deadlines for queries submitted by users affiliated with institutions that have offered more resources to the considered grid-based data warehouse. Data placement is a key issue in grid-based applications. Due to the grid’s heterogeneity and to the high cost of moving data across different sites, data replication is commonly used to improve performance and availability (Ranganathan & Foster, 2004). But most of the works on replica selection and creation in data grids consider generic file replication [e.g. (Lin et al, 2006; Siva Sathya et al, 2006; Haddad & Slimani, 2007)]. Therefore, the use of specialized data placement strategies for the deployment of data warehouses in grids still remains an open issue. In this chapter, we discuss the implementation of QoS-oriented Grid-enabled Data Warehouses. The grid-enabled DW is composed of a set of grid-enabled database management systems, a set of tools provided by an underlying grid resource management (GRM) system, and hierarchical schedulers. We combine data partitioning and replication, constructing a highly distributed database that is stored across the grid’s sites, and use QoS-oriented scheduling and a specialized replica
selection and placement strategy to achieve high QoS levels. This chapter is organized as follows: in the next Section, we present some background on data grids and grid-enabled databases. Then, we discuss QoS-oriented scheduling and placement strategies for the Grid-based warehouse. Following that, we present some experimental results and then draw conclusions. At the end of the chapter, we present some key term definitions.
DATA GRIDS AND GRID-ENABLED DATABASES

The Grid is an infrastructure that provides transparent access to distributed heterogeneous shared resources, which belong to distinct sites (that may belong to distinct real organizations). Each site has some degree of autonomy and may impose resource usage restrictions on remote users (Foster, 2001). In the last decade, some Grid Resource Management (GRM) systems [for example, Legion (Grimshaw et al, 1997) and the Globus Toolkit (Foster & Kesselman, 1997)] were developed in order to provide some basic functionality that is commonly necessary to run grid-based applications. Authorization and remote job execution management are among the most common features in GRM systems. Some of them also provide data management-related mechanisms, like efficient data movement [e.g. GridFTP (Allcock et al, 2005)] and data replica location [e.g. the Globus Replica Location Service – RLS (Chervenak et al, 2004)]. In terms of grid job scheduling, there are three basic architectures (Krauter et al, 2002): centralized, hierarchical and decentralized. In the first one, a single Central Scheduler is used to schedule the execution of all the incoming jobs, assigning them directly to the existent resources. Such an architecture may lead to good scheduling decisions, as the scheduler may consider the characteristics and loads of all available resources, but suffers
from a scalability problem: if a wide variety of distributed heterogeneous resources is available, considering all the resources’ individual characteristics when scheduling job execution may become very time consuming. In the hierarchical architecture, a Community Scheduler (or Resource Broker) is responsible for assigning job execution to sites. Each site has its own job scheduler (Local Scheduler), which is responsible for locally scheduling job execution. The Community Scheduler and Local Schedulers may negotiate job execution, and each Local Scheduler may implement local resource utilization policies. Besides that, as the Community Scheduler does not have to know exactly the workload and characteristics of each available node, this model leads to greater scalability than the centralized scheduling model. In the decentralized model, there is no Central Scheduler. Each site has its own scheduler, which is responsible for scheduling local job execution. Schedulers must interact with each other in order to negotiate remote job execution. Several messages may be necessary during the negotiation in order to achieve good job scheduling, which may impact the system’s performance. Some of the GRM systems have built-in scheduling policies, but almost all enable users to implement their own scheduling policies or to use application-level schedulers. In this context, some general-purpose application-level schedulers were designed [e.g. Condor-G (Frey et al, 2001) and Nimrod-G (Buyya et al, 2000)]. These general-purpose schedulers usually consider some kind of user-specified requirement or QoS parameter (e.g. a job’s deadline), but may fail to efficiently schedule data-bound jobs. Query scheduling strategies for data-bound jobs were evaluated by Ranganathan & Foster (2004). Data Present (DP), Least Loaded Scheduling (LLS) and Random Scheduling (RS) were compared. In RS, job execution is randomly scheduled to available nodes. In LLS, each job is scheduled to be executed by the node that has the lowest number of waiting jobs. Both in RS
and LLS, a data-centric job may be scheduled to be executed by a node that does not store the data required to execute such a job. In this case, remote data is fetched during job execution. In the DP strategy, each job is assigned to a node that stores the job’s required input data. Ranganathan & Foster claim that, in most situations, DP has better performance than LLS and RS (as moving data across grid nodes may be very time consuming). There are several parameters that should be considered when scheduling data-centric jobs. These include the size of the job’s input and output data, and the network bandwidth among grid nodes. Park & Kim (2003) present a cost model that uses such parameters to estimate a job’s execution time at each node (both considering that a job can be executed at the submission site or not, and that it may use local or remote data as input). Job execution is scheduled to the node with the lowest predicted execution time. Although very promising, grid-enabled database management systems were not widely adopted for a long time (Nieto-Santisteban et al, 2005; Watson, 2001). Watson (2001) proposed the construction of a federated system with the use of ODBC/JDBC as an interface for heterogeneous database systems. In more recent work, web services are used as interfaces to database management systems. Alpdemir et al (2003) present an Open Grid Services Architecture [OGSA – (Foster et al, 2002)]-compatible implementation of a distributed query processor (Polar*). A distributed query execution plan is constructed from basic operations that are executed at several nodes. Costa & Furtado (2008c) compare the use of centralized and hierarchical query scheduling strategies in grid-enabled databases. The authors show that hierarchical schedulers can be used without significant loss in the system’s performance and can also lead to good levels of achievement of Service Level Objectives (SLOs). In Costa & Furtado (2008b), the authors propose the use of reputation systems to schedule deadline-
marked queries among grid-enabled databases when several replicas of the same data are present at distinct sites. In Grids, data replicas are commonly used to improve job (or query) execution performance and data availability. Best Client and Cascading Replication are among the dynamic file replication strategies evaluated by Ranganathan & Foster (2001) for use in the Grid. In both models, a new file replica is created whenever the number of accesses to an existing data file is greater than a threshold value. The difference between the methods resides in where the new file is placed. The ‘best client’ of a certain data file is defined as the node that has requested it the most times in a certain time period. In the Best Client placement strategy, the new replica is placed at the best client node. In the Cascading Replication method, the new file is placed at the first node on the path between the node that stores the file being replicated and the best client node. The Best Client strategy served as inspiration for the Best Replica Site strategy (Siva Sathya et al, 2006). The main difference between this strategy and the original Best Client is that in Best Replica Site the site in which the replica is created is chosen considering not only the number of accesses from clients to the dataset, but also the replica’s expected utility for each site and the distance between sites. Siva Sathya et al (2006) also propose two other strategies: Cost Effective Replication and Topology Based Replication. In the first one, a cost function is used to choose the site at which a replica should be created (the cost function evaluates the cost of accessing a replica at each site). In the latter, database replicas are created at the node that has the greatest number of direct connections to other nodes. Topology-related aspects are also considered by Lin et al (2006) in order to choose replica locations. The authors consider a hierarchical (tree-like) grid in which the database is placed at the tree root. Whenever a job is submitted, the scheduler looks for the accessed data at the node at which the job
was submitted. If the necessary data is not at such a node, then the scheduler asks for it at the node’s parent. If the parent node does not have a replica of the searched data, then the scheduler looks for it at the grandparent node, and so on. Whenever the number of searched nodes is greater than a defined value, a new data replica is created. Such a newly created replica is placed at the node that maximizes the number of queries that can be answered without creating new replicas. Maximizing the economic value of locally stored data is the objective of the strategy proposed by Haddad & Slimani (2007). In such a strategy, there is a price to access each data fragment. Each node tries to foresee the future price of the fragments and stores the ones that are forecast to be the most valuable. Most of the abovementioned strategies are oriented toward file-based grids. Others are related to best-effort-oriented scheduling in grid-enabled databases. But all of them are somehow related to the aspects we deal with in the next Sections. In the next Section, we discuss the architecture and scheduling for the QoS-oriented grid-based distributed data warehouse.
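Before moving on, the scheduling policies discussed in this background can be summarized in a short sketch. The node and job fields (queue_length, datasets, speed, work, input, input_size) are illustrative assumptions, and the cost estimate is a simplified Park & Kim-style calculation, not their full model.

import random

def schedule(job, nodes, policy):
    if policy == "RS":                     # Random Scheduling
        return random.choice(nodes)
    if policy == "LLS":                    # Least Loaded Scheduling
        return min(nodes, key=lambda n: n.queue_length)
    if policy == "DP":                     # Data Present
        holders = [n for n in nodes if job.input in n.datasets]
        return holders[0] if holders else random.choice(nodes)

def estimated_time(job, node, bandwidth):
    # Simplified cost estimate: fetch remote input if needed, then compute.
    transfer = 0 if job.input in node.datasets else job.input_size / bandwidth
    return transfer + job.work / node.speed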
THE DISTRIBUTED QOS-ORIENTED WAREHOUSE

Data warehouses are huge repositories of historical data. They are subject-oriented: a certain subject (revenue, for example) is analyzed considering several distinct measures (e.g. time period). Each analysis measure’s domain is called a dimension. Therefore, the DW is a multidimensional space. Such a space is commonly represented in relational databases as a star schema (Chaudhuri & Dayal, 1997), with several distinct dimension tables and a huge facts table. The dimension tables store information about the analysis measures’ domains. The facts table stores data that represents real-world events, together with pointers to the dimension tables. Figure 1 presents an example of a star schema (in the remainder of this chapter, tables are considered to be conceptually organized in a star schema). The Grid-based data warehouse is accessed by users of geographically distributed sites which may or may not belong to the same real organization, but which are put together within a grid infrastructure. Each site may share one or more resources with the grid. Examples of possible shared
Figure 1. Sample Star Schema
resources are storage systems, computer clusters and supercomputers. Data warehouses are usually deployed at a single site, but that may not be the most effective layout in a grid-based DW implementation. In fact, in such an environment, placing the entire database at a single site would be more expensive and time consuming than creating a distributed DW that uses the available distributed resources to store the database and to execute users’ queries. It is important to consider not only that users are distributed across distinct grid sites but also that the warehouse’s data may be loaded from several sites. Hence, in the distributed Grid-based DW, data is partitioned and/or replicated at nodes from distinct sites and may be queried by any grid participant.
Best-Effort Approaches for Grid-Enabled Warehouses

There is some previous work on implementing and using grid-enabled data warehouses, but most of it uses best-effort-oriented approaches, which may not be the most adequate in grid-based systems (as presented in the previous Section, grid scheduling is usually satisfaction-oriented). High availability and high performance are the main concerns of Costa & Furtado (2006). Each participating site stores a partitioned copy of the entire warehouse. Intra-site parallelism is obtained by the use of the Node-Partitioned Data Warehouse (NPDW) strategy (Furtado, 2004). A hierarchical scheduler architecture is used together with an on-demand scheduling policy (idle nodes ask the Central Scheduler for new queries to execute). Such a model leads to good performance and high availability, but also consumes too much storage space, as the whole warehouse is present at each participating site. The OLAP-enabled grid (Lawrence & Rau-Chaplin, 2006; Dehne et al, 2007) is a two-tier grid-enabled warehouse. The users’ local domain
is considered the first tier and stores cached data. Database servers at remote sites compose the second tier. The scheduling algorithm tries to use the locally stored data to answer submitted queries. If that is not possible, then remote servers are accessed. The Globus Toolkit is used by Wehrle et al (2007) as an underlying infrastructure to implement a grid-enabled warehouse. Facts table data is partitioned across participating nodes and dimension data is replicated. Some specialized services are used at each node: (i) an index service provides information about locally stored data; and (ii) a communication service is used to access remote data. Locally stored data is used to answer incoming queries. If the searched data is not stored at the local node, then remote access is done by the use of the communication service. This strategy and the abovementioned OLAP-enabled strategy do not provide any autonomy for local domains.
Distributed Data Placement in the QoS-Oriented DW

In data warehouses, users’ queries usually follow some kind of access pattern, like geographically related ones in which users from a location may have more interest in data related to that location than in data about other locations (Deshpande et al, 1998). That may also be applicable to the grid. For instance, consider a global organization that uses a grid-based DW about sales which is accessed by users from several countries. The users in New York City, USA, may start querying data about sales revenue in Manhattan, and then do successive drill-up operations in order to obtain information about sales in New York City, in New York State and, finally, in the USA. Only rarely would New York users query data about sales in France. In the same way, users from Paris may start querying the database about sales in France, and then start doing drill-down operations in order to obtain data about sales in Paris and, then, individually on each of its arrondissements.
In order to reduce data movement across sites (improving the system’s performance), the grid-based DW tables are physically distributed across different sites. Such distribution is represented in a Global Physical Schema (GPS). Ideally, the physical data distribution strategy is transparent to users, who should submit queries considering a unified Logical Model (LM). Grids are highly heterogeneous environments. At each site, different types of resources may be available (like shared-nothing and shared-disk parallel machines, for example). It is somewhat difficult to find an intra-site allocation strategy that is optimal in all the possible situations. Therefore, each site may use its own local physical allocation strategy [e.g. Multi-Dimensional Hierarchical Fragmentation – MDHF (Stöhr et al, 2000) or the Node-Partitioned Data Warehouse strategy – NPDW (Furtado, 2004)]. Each site’s existent relations are represented in a Local Site Physical Schema (LSPS). This assumption fits well with the idea of domain autonomy, which is one of the grid’s characteristics. In the generic grid-based DW, nodes from any site can load data into the database. But the same data cannot be loaded from distinct sites. This leads to the idea that each piece of data has a single site (which we call its Data Source Site) that is its primary source. In order to reduce data movement across the grid’s sites (considering the abovementioned geographically related access patterns), each site should maintain a copy of the facts data it has loaded into the DW (in this chapter, we consider that tables in the LM are organized in a star schema). This generates a globally physically partitioned facts table which uses the values of a site source attribute as the partitioning criterion. Depending on the implementation, the site source attribute values may be combined with values of other existent dimensions. In fact, the repartitioning of each site source-based facts table fragment into several smaller fragments can benefit the system in several ways. For instance, in such a situation, each smaller fragment can be
replicated to distinct sites, which would increase the system’s degree of parallelism. Besides that, depending on the selection predicate, some queries may access only a subset of the smaller fragments, which would be faster than accessing the whole original site source-based fragment. These two situations are represented in Figure 2. Therefore, even at the global level, other partitioning criteria should be used together with the site source attribute. The use of the most frequently used equi-join attributes as part of the partitioning criteria for the facts table can improve performance by reducing data movement across sites when executing queries [as it does in shared-nothing parallel machines (Furtado, 2004b)]. Besides facts table partitions, each site should also store dimension table data. Full replication of dimension tables across all sites may be done to reduce inter-site data movement during query execution and to improve data availability. Such a strategy is feasible when dimension tables are small (this also facilitates system management). But when large dimension tables are present, they can be fragmented both at the intra-site and inter-site levels in order to improve performance and QoS levels. The intra-site dimension table fragmentation strategy depends on the locally chosen physical allocation strategy (which is dependent on the type of locally available resources, as discussed earlier). Inter-site fragmentation of large dimension tables should be done using a strategy similar to that of facts table fragmentation: initially, dimension data should remain at its Data Source Site. Inter-site replication is done when necessary. Derived partitioning of the facts table can also be done, improving the system’s performance as join operations can be broken into subjoins that are executed in parallel at distinct sites. Although the use of facts table derived partitioning depends on the semantics of the stored data, such partitioning should be used together with the aforementioned partitioning based on the site source attribute. In the case of large dimension tables’ fragmentation, some data replication may also oc-
Figure 2. Examples of benefits on the use of smaller facts table fragments at the global level
cur. For instance, let’s consider a grid-DW of a nation-wide retail store. There are several sites participating in such a grid-DW, each one in a distinct state. In such a warehouse, a (large) dimension table stores information about customers. Such a table may be fragmented according to the location in which the customer buys. Initially, each customer’s information would be at a single site. But when a certain customer travels (or moves) to another state and buys at stores in such a state, his/her information may also appear in that state’s database. When there is dimension table (fragment) replication at distinct sites, a replica consistency strategy may be necessary. There are several works in the literature about algorithms to efficiently maintain replica consistency in distributed and grid-based databases [e.g., (Akal et al, 2005; Breitbart et al, 1999; Chen et al, 2005)]. Hence, in the GPS, the facts table is partitioned by the combination of the site source attribute with the other most frequently used equi-join attributes.
Derived facts table partitioning may be used. Each site stores the partitions for which it is the Data Source Site. Large dimension tables are fragmented and small dimension tables are replicated at all sites. A fragmented site source dimension table (each site storing only its own information) should be used. Such a data distribution strategy is represented in Figure 3. Facts table fragments are replicated across the grid’s sites in order to improve performance and availability.
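A minimal sketch of the global partitioning rule just described, assuming illustrative attribute names (source_site as the site source attribute and customer_id as a frequently used equi-join attribute):

def fragment_id(row, n_subfragments):
    site = row["source_site"]                          # primary criterion
    sub = hash(row["customer_id"]) % n_subfragments    # equi-join attribute
    return (site, sub)

def place(rows, n_subfragments=4):
    fragments = {}
    for row in rows:
        key = fragment_id(row, n_subfragments)
        fragments.setdefault(key, []).append(row)
    return fragments    # each (site, sub) fragment can be replicated on its own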
The QoS-Oriented Query Scheduling

Users submit queries to the grid-based DW considering the Logical Model. Ideally, the physical data distribution strategy should be transparent to users. Hence, the first phase in query scheduling is transforming the submitted query (job) into other queries (tasks) that access the physically distributed global relations. Such a transformation is similar to the ones presented in (Furtado, 2004b).
Figure 3. Global physical allocation example
The generated tasks are assigned to sites by a Community Scheduler. At each site, a Local Scheduler is responsible for managing task execution. If a domain-specific physical layout is used, the Local Scheduler must also convert the globally specified task into other queries that access the physically existent relations. The Community Scheduler specifies the necessary requirements (execution deadlines) of each task (rewritten query) in order to execute the user’s query at the desired QoS level. Then, there is a task execution negotiation phase between the Community Scheduler and Local Schedulers, in order to verify whether each of the rewritten queries (tasks) can be executed by its deadline. If any task cannot be executed by the specified deadline, then the user is notified that the required SLO cannot be achieved and a new one must be specified, or the query execution is canceled. The Community Scheduler assigns queries to sites considering the Global Physical Schema. Hence, if the local domain uses some data partitioning policy different from the one used at the global level (e.g. large dimension table partitioning, as in the abovementioned NPDW strategy), then
the Local Scheduler should transform the globally specified query into the ones that should be locally executed. Besides that, results merging should also be done at the local site level in order to send back a single result corresponding to the query (task) the site has received.
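The negotiation phase can be sketched as follows; the rewrite function and the can_meet/submit interfaces of the Local Schedulers are assumptions made for illustration, not a concrete system's API.

def negotiate(query, deadline, sites, rewrite):
    tasks = rewrite(query)          # tasks over the distributed global relations
    for task in tasks:
        if not sites[task.site].can_meet(task, deadline):
            return None             # SLO cannot be achieved; the user is notified
    for task in tasks:              # every Local Scheduler agreed: SLAs signed
        sites[task.site].submit(task, deadline)
    return tasks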
EVALUATING IF A SLO CAN BE ACHIEVED

Let’s consider a user-submitted query Q with a deadline interval d. The system must estimate the query’s total execution time (tet) and compare it with d in order to verify whether the query can be executed by its deadline. When the system estimates that the execution can be done according to the specified SLO (d ≥ tet), it starts query execution. Otherwise, query execution is not started and the user is informed that the established SLO cannot be achieved. In order to predict the tet value, the system estimates the execution time of each of the query’s tasks (task finish time – tft), considering three key time components for each task:
(i) the query execution time at the local site (local execution time – let);
(ii) the necessary time to transfer required data to the site (data transfer time – dtt);
(iii) the necessary time to transfer the query’s results back from the chosen site (results transfer time – rtt).
The tft value of a single task at a certain site is computed by Equation 1. An upper-bound estimate for the user’s query execution time (tet) is obtained by Equation 2.

tft = let + dtt + rtt    (1)

tet = Max(tft)    (2)
To estimate the value of tft, the Community Scheduler must have some estimate of its components. First of all, it predicts the values of dtt and rtt with the support of a grid infrastructure network monitoring tool [like the Network Weather Service – NWS (Wolski, 1997)]. Such a tool is used to predict the network latency (L) and data transfer throughput (TT) between sites. The Community Scheduler uses such predicted values for network characteristics together with estimated dataset sizes (obtained from database statistics) to predict dtt and rtt; the predicted transfer time (tbs) of a dataset of size z between sites i and j can be obtained by Equation 3.

tbsi,j = L + z / TT    (3)
On the other hand, the Community Scheduler does not have control of intra-site data placement and query execution. Therefore, it is somewhat difficult for this module to estimate tasks’ execution times. Such estimation is done by local schedulers. In fact, in QoS-oriented scheduling, the Community Scheduler does not have to know the exact time necessary to execute a query: local schedulers must commit themselves to executing the assigned queries within a certain time interval. Such an interval is the maximum value that let can assume (mlet) in order to finish the user’s query execution by the specified SLO. Hence, for each task, the Community Scheduler computes the mlet value (Equation 4) and uses such a value as a task deadline when negotiating with local schedulers.

mlet ≤ d − (dtt + rtt)    (4)
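Equations 1 to 4 translate directly into a few helper functions; this is a plain restatement of the formulas, with the network predictions (L, TT) passed in as if obtained from a monitor such as NWS.

def tbs(z, L, TT):
    return L + z / TT              # Equation 3: transfer time of z bytes

def tft(let, dtt, rtt):
    return let + dtt + rtt         # Equation 1: task finish time

def tet(task_finish_times):
    return max(task_finish_times)  # Equation 2: the slowest task bounds the query

def mlet(d, dtt, rtt):
    return d - (dtt + rtt)         # Equation 4: budget left for local execution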
Figure 4 presents a general view of the SLO-aware scheduling model.
LOCAL SCHEDULERS AND SERVICE LEVEL AGREEMENTS

When estimating whether a user’s query can be executed by the proposed deadline, the Community Scheduler must consider the time necessary to execute each rewritten query at local sites (let). But not only does the Community Scheduler not have total control of the execution environment, but
Figure 4. General view of the SLO-aware scheduling model
also each site can have local domain policies that constrain the use of local resources by remote users. Therefore, the time necessary to execute each task should be predicted by local schedulers. But in QoS-oriented scheduling, each site need not inform the Community Scheduler of the exact predicted query execution time. On the other hand, local schedulers should commit themselves to executing the negotiated query by a certain deadline (mlet) that is specified by the Community Scheduler. When a local scheduler agrees to execute a query by a certain deadline, it makes a SLA (Service Level Agreement) with the Community Scheduler. When a SLA is signed, the local scheduler is committed to executing the query by the negotiated deadline. But it can, for instance, reorder local query execution (it does not have to execute each incoming query as fast as possible) or change the number of queries that are concurrently executed at the local DBMS (multi-programming degree). The QoS-OES (Costa & Furtado, 2008) scheduler is an example of a QoS-oriented query scheduler that can be used in such a context. Such a module is a generic external scheduler that is used as middleware between the DBMS and its users. The QoS-OES is capable of estimating query execution time in a multi-query environment and of committing itself to executing a submitted query by a certain deadline, as long as the specified deadline is greater than the predicted query execution time.
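A sketch of the admission test a QoS-OES-like local scheduler could apply; predict_execution_time stands for the site-internal estimator and is an assumption, as the actual QoS-OES internals are not described here.

class LocalScheduler:
    def __init__(self, predict_execution_time):
        self.predict = predict_execution_time   # site-internal estimator
        self.queue = []

    def can_meet(self, task, mlet):
        # Sign the SLA only if the local prediction fits the offered budget.
        return self.predict(task, self.queue) <= mlet

    def submit(self, task, mlet):
        # Once signed, the site may reorder its queue or tune the
        # multi-programming degree, as long as the deadline is met.
        self.queue.append((task, mlet))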
Caching and Replication for QoS

Grids are highly heterogeneous environments in which the use of database replicas can lead to great performance improvements and to high QoS levels. But as the problem of choosing the optimal number of database replicas and doing the optimal placement of such replicas onto the nodes of a distributed environment is NP-hard (Loukopoulos & Ahmad, 2000), some heuristics should be used. The heuristics used in this Sec-
tion are QoS-oriented, which means they intend to increase the system’s SLO-achievement rate (SLO-AR). The SLO-AR (Costa & Furtado, 2008b) is a performance metric that aims to indicate how well a system is performing in executing jobs by the specified service level objectives. It is defined as the relation between the number of queries whose execution finishes by the required deadline (N) and the number of queries in the submitted workload (W). Therefore, the SLO-oriented replica selection and placement strategy aims at increasing the number of queries that the system executes by the specified SLOs. In order to do that, a benefit-based mechanism is implemented by a Replication Manager (RM), as described below.
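In the notation just introduced, the metric can be written compactly as:

SLO-AR = N / W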
INTER-SITE QOS-ORIENTED DYNAMIC REPLICATION

The RM monitors the number of times that a SLO objective cannot be achieved due to the absence of a certain dataset (e.g. a facts table fragment) at a certain site and computes the total benefit value (β) that the system would have if a replica of such a dataset were created. When β is greater than a threshold value, the system considers the possibility of creating the data replica at the evaluated site (at this point, we note that replicas of input datasets are created; later in this chapter, we discuss how to determine whether such datasets are facts table fragments or computed chunks). In order to implement such a policy, when the Local Scheduler cannot execute a task (query) by the specified mlet time, it should evaluate whether it would achieve the specified deadline if a replica of a given dataset were present at the site. When the Local Scheduler predicts that it would achieve the required task’s deadline if a certain replica were locally stored, it informs the Replication Manager of such a replica. In such a
situation, the value of β for the specified dataset is incremented by a certain δ value (the benefit of the considered input dataset replica to the system), as represented in Equation 5. However, the δ value of each task should vary over time, in order to differentiate the benefit for old queries from that for newer queries. Therefore, a time discount function may be used in order to compute δ, as presented in Equation 6.

β = Σi δi    (5)

δi = e^(−Δt/λ)    (6)
In Equation 6, Δt represents the time window between the task execution time (of the query that would benefit from the input dataset replication) and the current time, and λ enables the use of different time intervals [as defined in (Huynh et al, 2006)]. Whenever β is greater than a threshold value for a certain dataset/site pair, the site is marked as a candidate to receive a replica of the considered dataset. Indeed, the replica is immediately created if there is enough disk space. Otherwise, the system has to evaluate whether some of the existing data replicas (of other datasets) should be replaced by the new replica candidate. In order to do that, the RM also maintains the benefit scores of existing dataset replicas. Such scores are computed in the same way as the β and δ values of non-existing replicas. If the β value of an existing replica is lower than that of a replica candidate, then a replica replacement is done. Otherwise, the system maintains the already existing replicas.
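The bookkeeping behind Equations 5 and 6 is small enough to sketch. The fragment below is an illustrative reading of the mechanism described above, not the authors' implementation; the threshold, the λ value and all names are assumptions.

import math
import time

LAMBDA = 3600.0    # assumed time-interval parameter of the discount function
THRESHOLD = 2.5    # assumed beta value above which replication is considered

class ReplicationManager:
    def __init__(self):
        # (dataset, site) -> execution times of tasks that missed their
        # deadline and would have met it with a local replica
        self.reports = {}

    def report_missed_deadline(self, dataset, site, task_time):
        """Called by a Local Scheduler when a replica of `dataset` at
        `site` would have allowed a task to meet its deadline."""
        self.reports.setdefault((dataset, site), []).append(task_time)

    def benefit(self, dataset, site, now=None):
        """Equations 5 and 6: beta is the sum of time-discounted deltas,
        delta_i = exp(-(now - t_i) / lambda)."""
        now = time.time() if now is None else now
        times = self.reports.get((dataset, site), [])
        return sum(math.exp(-(now - t) / LAMBDA) for t in times)

    def replica_candidates(self, now=None):
        """Dataset/site pairs whose accumulated benefit crosses the threshold."""
        return [(d, s) for (d, s) in self.reports
                if self.benefit(d, s, now) > THRESHOLD]

The same scoring can be kept for existing replicas, so that a candidate whose β exceeds that of a stored replica triggers the replacement described above.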
LOCAL CACHING AND INTRA-SITE REPLICA CANDIDATES

Fact table fragments are natural candidates for inter-site replication, as discussed previously in this chapter. When a Local Scheduler evaluates
that a certain deadline would be achieved if its site stored a local copy of a certain fragment, it informs the Replication Manager, which considers such a fragment as a dataset that is a candidate for inter-site replication. Such a fragment may be replicated or not, depending on its benefit to the system. As discussed before, each site is autonomous to implement its own data placement (and replication) strategy. Besides that, each site may also implement its own data caching mechanism. There are caching mechanisms that benefit from the multidimensional nature of warehouse data. Chunk-based caching (Deshpande et al, 1998; Deshpande & Naughton, 2000) is one of those mechanisms specialized for the DW. In chunk-based caching, the DW data to be stored in the cache is broken up into chunks that are cached and used to answer incoming queries. The list of chunks necessary to answer a query is broken into two: (i) chunks that may be obtained (or computed) from cached data; and (ii) chunks that have to be retrieved from the data warehouse database. In this method, it is sometimes possible to compute a chunk from chunks at different levels of aggregation (each aggregation level corresponds to a group-by operation) (Deshpande & Naughton, 2000). Such a computed-chunk mechanism may be used by local schedulers to implement local caching. But chunks of results, or computed chunks, can also be considered as candidates for replication by the Replication Manager. In this context, when a local scheduler evaluates that a certain missing chunk would enable the site to execute a task by a certain deadline that would otherwise not be achieved, the local scheduler must send the identification of such a chunk to the Replication Manager. Then, the RM considers the chunk as a dataset candidate for replication in its benefit-based dynamic replica selection and placement mechanism.
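As a small illustration of the chunk split just described (the chunk keying scheme below is invented for the example, not taken from the cited work):

def split_chunks(needed_chunks, cache):
    """Split the chunks a query needs into those answerable from cached
    data and those that must be recovered from the warehouse database."""
    hits = [c for c in needed_chunks if c in cache]
    misses = [c for c in needed_chunks if c not in cache]
    return hits, misses

# Chunks keyed by (aggregation level, region, month) -- an assumed scheme.
cache = {("month", "EU", "2008-01"): "...", ("month", "EU", "2008-02"): "..."}
needed = [("month", "EU", "2008-01"), ("month", "US", "2008-01")]
hits, misses = split_chunks(needed, cache)  # one cache hit, one miss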
EXPERIMENTAL EVALUATION

The QoS-oriented scheduling and dynamic replication mechanisms were experimentally evaluated in a simulation environment. The experimental setup is composed of 11 sites, which were inspired by the experimental testbed used in (Sulistio et al, 2007) and by the LHC Computing Grid Project sites (Bird et al, 2005). Figure 5 presents the main characteristics of the sites used. A Star Schema-based DW is considered. The fact table is partitioned into 121 fragments. Fragment sizes are generated according to Pareto's power-law distribution [which fits data grid files well (Sulistio et al, 2007)], with a 1 GB mean fragment size. Initially, each site stores the same number of fact table fragments (11 fragments). Two distinct network topologies are used in our tests (represented in Figure 6). The first one is a hierarchical topology, in which sites are organized in a binary tree according to their ids [hierarchical topologies are also considered in real projects, like the LHC Computing Grid Project, the data storage and analysis project for CERN's Large Hadron Collider (Bird et al, 2005)]. The second network model is a non-hierarchical, dual-ring topology. In such a topology, there is no central root site (eliminating a possible bottleneck in the system) and each site is directly connected to two other sites. Data movement in the rings is done in opposite directions. In all the tests, we consider a data transfer rate of 50 Mbps and a latency of 10 milliseconds. The considered query workload is composed of 1,000 tasks (re-written queries). Task sizes vary around 2,000 kMIPS ± 30%, which means that a typical task execution would take about 30 minutes at the least powerful site and 40 seconds at the most powerful one. Figure 5 presents the number of queries submitted at each site. In order to model that, we consider that each query may be submitted by a user from any site, but the probability of a query being submitted at a specific site is proportional to the number of DW users at the site. Users' access patterns were modeled considering that half of the tasks access data stored at the same site that submitted the job. In order to evaluate the effectiveness of the QoS-aware scheduling and dynamic replication strategies, we have run several tests using the QoS-oriented scheduling and three different dynamic data replication strategies: (i) the QoS-aware,
Figure 5. Experimental setup description
Figure 6. Experimentally tested network topologies
(ii) Best Client (BC) and (iii) Data Least Loaded (DLL). Variations of the BC strategy are used by Ranganathan & Foster (2001) and Siva Sathya et al (2006). The DLL is used by Ranganathan & Foster (2004). Both in BC and in DLL, each site monitors the number of accesses to the data replicas it stores, computing the number of times each fragment was requested. When such a number is greater than a threshold value, a fragment replica is created at another site. In BC, the fragment replica is created at the site that has demanded the considered fragment the most times. In DLL, the fragment replica is created at the site that has the least remaining work to execute.

Figure 7. SLO-achievement rate
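The two placement rules described above differ only in the target-site choice, as the following hedged sketch shows (the threshold value and the data structures are assumptions made for illustration):

def pick_target_site(access_counts, pending_work, strategy, threshold=100):
    """access_counts: site -> number of requests for a given fragment;
    pending_work: site -> remaining work at that site.
    Returns the site that should receive a new fragment replica, or None."""
    if sum(access_counts.values()) <= threshold:
        return None  # fragment not requested often enough to replicate
    if strategy == "BC":   # Best Client: the site that demanded it most
        return max(access_counts, key=access_counts.get)
    if strategy == "DLL":  # Data Least Loaded: least remaining work
        return min(pending_work, key=pending_work.get)
    raise ValueError("unknown strategy: " + strategy)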
The obtained SLO-AR values are presented in Figure 7. All the evaluated dynamic replication methods lead to good SLO-AR values. Such success confirms the quality of the QoS-oriented hierarchical scheduling model used. But the obtained results also show the benefits of the proposed QoS-aware dynamic data replication and placement, as it was the method that led to the highest SLO-AR value. In Figure 8 we present the measured throughput for the evaluated strategies. Once again, the proposed QoS-aware replication strategy led to the highest values in the two tested network configurations. This happens because such a placement strategy has the same objective as the query scheduling strategy used. Therefore, it not only increases the number of queries that the Community Scheduler agrees to execute, but also leads to better resource utilization than the other replica selection and placement methods. The number of created replicas per method was almost the same, and no method created a huge number of replicas for the same data fragment (as shown in Figure 9). In fact, the success of the QoS-aware scheduling method is mostly related to its replica placement strategy.
In Figure 10, we present the number of created replicas per site for the hierarchical topology. Let's analyze site 6: such a site is almost the most powerful one, and it is the site with the highest number of submitted queries. This is also the site at which the BC method created the highest number of data replicas. But as the BC method places too many replicas at such a site, the scheduler either assigns too many queries to site 6 (which puts it into an overloaded situation and decreases performance, leading to a lower SLO-achievement rate) or schedules queries to other sites that are
Figure 8. Measured throughput
Figure 9. Number of replicas per facts table fragment - hierarchical topology
Figure 10. Number of facts table fragments at each site in hierarchical topology
near site 6 but that do not have the same data replicas (in such a case, data movement is done during query execution, which also decreases system performance). On the other hand, the QoS-aware method placed a higher number of replicas at sites 3-5 than the other methods. Such sites are of medium power and are relatively near sites 1, 2 and 6 (the three sites with the highest number of submitted queries). This way, many of the queries submitted at sites 1, 2 and 6 can be executed at sites 3-5 with good performance (no database copy is done during execution; only results - which are relatively small in size - are transferred between sites during query execution). In contrast, the DLL strategy placed many replicas on less powerful sites (like 10 and 7). This happened because the DLL strategy evaluated that such sites had a small amount of pending work. But such sites take too much time to execute a query, and the Community Scheduler rarely assigns query execution to them. Therefore, the replicas created by DLL were of less utility to the system.
CONCLUSION

Grid-based data warehouses are useful in many global organizations that generate huge volumes of distributed data that should be transparently queryable by the organization's participants. In such an environment, the Grid is used as the basic infrastructure for the deployment of a large, distributed, mostly read-only database. But due to the grid's special characteristics, like resource heterogeneity, geographical dispersion and site autonomy, the efficient deployment of huge data warehouses over grid-connected sites is a special challenge. In this chapter, we present QoS-oriented scheduling and distributed data placement strategies for the grid-based warehouse. We discuss the use of a physically distributed database, in which tables are both partitioned and replicated across sites. The use of fact table partitioning and replication is particularly relevant, as grid users' queries may follow geographically related access patterns. Inter-site fragmentation and replication of dimension tables are done in order to achieve good performance in query execution, but also to reduce data movement across sites, which is a costly operation in grids.
Incoming queries are rewritten into other queries (tasks) that are assigned to sites based on Service Level Agreements between the Community Scheduler and Local Schedulers. The use of a hierarchical scheduling model leads to good SLO-achievement rates and also maintains site autonomy, as each site's Local Scheduler may implement its own scheduling strategy. Dynamic data replication is very important in grid-based data-bound jobs. In the grid-enabled warehouse, dynamic replication of fact table fragments or of computed chunks is important to improve the system's performance. The QoS-oriented dynamic replica selection and placement is especially important to increase the SLO-achievement rate in grid-enabled warehouses.
REFERENCES

Akal, F., Türker, C., Schek, H., Breitbart, Y., Grabs, T., & Veen, L. (2005). Fine-grained replication and scheduling with freshness and correctness guarantees. In Proceedings of the 31st International Conference on Very Large Data Bases (pp. 565-576).

Akinde, M. O., Böhlen, M. H., Johnson, T., Lakshmanan, L. V., & Srivastava, D. (2002). Efficient OLAP query processing in distributed data warehouses. In Proceedings of the 8th International Conference on Extending Database Technology: Advances in Database Technology (LNCS 2287, pp. 336-353).

Allcock, W., Bresnahan, J., Kettimuthu, R., & Link, M. (2005). The Globus striped GridFTP framework and server. In Proceedings of the 2005 ACM/IEEE Conference on Supercomputing (pp. 54-65).
Alpdemir, M., Mukherjee, A., Paton, N., Watson, P., Fernandes, A., Gounaris, A., & Smith, J. (2003). OGSA-DQP: A service-based distributed query processor for the Grid. In Proceedings of the UK e-Science All Hands Meeting.

Bird, I., & The LCG Editorial Board (2005). LHC Computing Grid Technical Design Report [LCG-TDR-001, CERN-LHCC-2005-024].

Breitbart, Y., Komondoor, R., Rastogi, R., Seshadri, S., & Silberschatz, A. (1999). Update propagation protocols for replicated databases. SIGMOD Record, 28(2), 97–108. doi:10.1145/304181.304191

Chaudhuri, S., & Dayal, U. (1997). An overview of data warehousing and OLAP technology. SIGMOD Record, 26(1), 65–74. doi:10.1145/248603.248616

Chen, G., Pan, Y., Guo, M., & Lu, J. (2005). An asynchronous replica consistency model in data grid. In Proceedings of Parallel and Distributed Processing and Applications - 2005 Workshops (LNCS 3759, pp. 475-484).

Chervenak, A. L., Palavalli, N., Bharathi, S., Kesselman, C., & Schwartzkopf, R. (2004). Performance and scalability of a replica location service. In Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (pp. 182-191).

Costa, R. L. C., & Furtado, P. (2008). A QoS-oriented external scheduler. In Proceedings of the 2008 ACM Symposium on Applied Computing (pp. 1029-1033). New York: ACM Press.

Costa, R. L. C., & Furtado, P. (2008). QoS-oriented reputation-aware query scheduling in data grids. In Proceedings of the 14th European Conference on Parallel and Distributed Computing (Euro-Par).

Costa, R. L. C., & Furtado, P. (2008). Scheduling in Grid databases. In Proceedings of the 22nd International Conference on Advanced Information Networking and Applications – Workshops (pp. 696-701).
917
QoS-Oriented Grid-Enabled Data Warehouses
Deshpande, P., & Naughton, J. F. (2000). Aggregate aware caching for multi-dimensional queries. In Proceedings of the 7th international Conference on Extending Database Technology: Advances in Database Technology vol. 1777 (pp. 167-182).
Huynh, T. D., Jennings, N. R., & Shadbolt, N. R. (2006). An integrated trust and reputation model for open multi-agent systems. Autonomous Agents and Multi-Agent Systems, 13(2), 119–154. doi:10.1007/s10458-005-6825-4
Deshpande, P. M., Ramasamy, K., Shukla, A., & Naughton, J. F. (1998). Caching multidimensional queries using chunks. In A. Tiwary & M. Franklin (Eds.), Proceedings of the 1998 ACM SIGMOD international Conference on Management of Data (Seattle, Washington, United States, June 01 - 04, 1998) (pp. 259-270). New York: ACM Press.
Krauter, K., Buyya, R., & Maheswaran, M. (2002). A taxonomy and survey of grid resource management systems for distributed computing. Software, Practice & Experience, 32(2), 135–164. doi:10.1002/spe.432
Foster, I., Kesselman, C., Nick, J., & Tuecke, S. (2002). The physiology of the grid: An open grid services architecture for distributed systems integration (Globus Project Tech Report).

Foster, I., Kesselman, C., Tsudik, G., & Tuecke, S. (1998). A security architecture for computational grids. In Proceedings of the 5th ACM Conference on Computer and Communications Security, CCS '98 (pp. 83-92).
Lawrence, M., & Rau-Chaplin, A. (2006). The OLAP-Enabled Grid: Model and query processing algorithms. In Proc. of the 20th international Symposium on High-Performance Computing in An Advanced Collaborative Environment (HPCS). Lima, A., Mattoso, M., & Valduriez, P. (2004). Adaptive virtual partitioning for OLAP query processing in a database cluster. In Proceedings of the Brazilian Symposium on Databases (SBBD) (pp. 92-105).
Foster, I. T. (2001). The anatomy of the grid: Enabling scalable virtual organizations. In Proceedings of the 7th international Euro-Par Conference on Parallel Processing (LNCS 2150, pp. 1-4).
Lin, Y., Liu, P., & Wu, J. (2006). Optimal placement of replicas in data grid environments with locality assurance. In Proceedings of the 12th international Conference on Parallel and Distributed Systems - Vol 1 (2006) (pp. 465-474).
Furtado, P. (2004). Workload-based placement and join processing in node-partitioned data warehouses. In Proceedings of the 6th International Conference on Data Warehousing and Knowledge Discovery (LNCS 3181, pp. 38-47).
Loukopoulos, T., & Ahmad, I. (2000). Static and adaptive data replication algorithms for fast information access in large distributed systems. In Proc. of the 20th Intern. Conference on Distributed Computing Systems (ICDCS).
Furtado, P. (2004). Experimental evidence on partitioning in parallel data warehouses. In Proceedings of the 7th ACM international Workshop on Data Warehousing and OLAP (pp. 23-30).
Nieto-Santisteban, M. A., Gray, J., Szalay, A., Annis, J., Thakar, A. R., & O’Mullane, W. (2005). When database systems meet the grid. In CIDR (pp. 154-161).
Haddad, C., & Slimani, Y. (2007). Economic model for replicated database placement in Grid. In Proceedings of the Seventh IEEE international Symposium on Cluster Computing and the Grid (pp. 283-292). IEEE Computer Society.
Park, S., & Kim, J. (2003). Chameleon: A resource scheduler in a data grid environment. In Proceedings of the 3rd International Symposium on Cluster Computing and the Grid. IEEE Computer Society.
918
QoS-Oriented Grid-Enabled Data Warehouses
Poess, M., & Othayoth, R. K. (2005). Large scale data warehouses on grid: Oracle database 10g and HP ProLiant servers. In Proceedings of the 31st International Conference on Very Large Data Bases (pp. 1055-1066).

Ranganathan, K., & Foster, I. (2004). Computation scheduling and data replication algorithms for data Grids. In Grid resource management: State of the art and future trends (pp. 359-373). Norwell, MA: Kluwer Academic Publishers.

Roy, A., & Sander, V. (2004). GARA: A uniform quality of service architecture. In Grid resource management: State of the art and future trends (pp. 377-394). Norwell, MA: Kluwer Academic Publishers.

Siva Sathya, S., Kuppuswami, S., & Ragupathi, R. (2006). Replication strategies for data grids. In International Conference on Advanced Computing and Communications, ADCOM 2006 (pp. 123-128).

Stöhr, T., Märtens, H., & Rahm, E. (2000). Multi-dimensional database allocation for parallel data warehouses. In Proceedings of the 26th International Conference on Very Large Data Bases (pp. 273-284).

Transaction Processing Performance Council benchmarks (2008). Retrieved from http://www.tpc.org

Venugopal, S., Buyya, R., & Ramamohanarao, K. (2006). A taxonomy of Data Grids for distributed data sharing, management, and processing. ACM Computing Surveys, 38(1), 3. doi:10.1145/1132952.1132955

Watson, P. (2001). Databases and the grid. UK e-Science Technical Report Series.

Wehrle, P., Miquel, M., & Tchounikine, A. (2007). A grid services-oriented architecture for efficient operation of distributed data warehouses on Globus. In Proceedings of the 21st International Conference on Advanced Networking and Applications (AINA) (pp. 994-999).
Wolski, R. (1997). Forecasting network performance to support dynamic scheduling using the network weather service. In Proceedings of the 6th IEEE International Symposium on High Performance Distributed Computing (August 05-08, 1997) (p. 316). IEEE.
KEY TERMS AND DEFINITIONS

Community Scheduler: A specialized middleware, responsible for matching users' job requirements with the available resources in a grid through interaction with local schedulers. It assigns jobs to sites through a process that, besides requirement matchmaking, can also comprise some kind of negotiation with local schedulers. It is sometimes also called a Resource Broker or Meta-scheduler.

Data Grid: A Grid environment whose services are mainly used to deal with (including to store, process, replicate and move) huge volumes of distributed shared data, or over which grid-based applications that consume or generate huge volumes of data are executed.

Grid: A basic infrastructure used to interconnect and provide access to widely distributed, and possibly heterogeneous, shared resources that may belong to distinct organizations.

Grid Resource Management System: The resource management system that runs over the grid and is used to manage the available shared resources, providing a wide range of services (like efficient data movement, replica management and remote job submission and monitoring) to grid-based applications.

Grid-Enabled Databases: A set of Database Management Systems (DBMS) which are physically distributed and are queried by grid users through the use of a middleware together with a Grid Resource Management System.

Quality-of-Service (QoS): The term was first coined in the networking field to identify the ability of a certain technology to do
resource reservation in order to provide different priority to distinct applications or users. More recently, it has also been used to denote a user's satisfaction degree, or the ability to provide predictable performance levels that are in accordance with users' expectations.

Service Level Agreement (SLA): An agreement formed between a service provider and a service consumer, which defines the service levels (possibly in terms of Service Level Objectives) that should be provided for several characteristics (like performance and availability) of the provided services. It may also define guarantees and penalties (in case of non-compliance with the SLA).

Service Level Objective (SLO): A target value used to measure the performance of a service provider with respect to a specific characteristic, like response time or throughput. Its definition may also contain information about how the SLO is measured and the measurement period.
This work was previously published in Data Warehousing Design and Advanced Engineering Applications: Methods for Complex Construction, edited by Ladjel Bellatreche, pp. 150-170, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 4.2
EIS Systems and Quality Management Bart H.M. Gerritsen TNO Netherlands Organization for Applied Scientific Research, The Netherlands
DOI: 10.4018/978-1-60566-892-5.ch017

ABSTRACT

This chapter discusses the support of quality management by Enterprise Information Systems. After a brief introduction to ISO 9001, one of the principal and most widespread quality management frameworks, this chapter discusses the design and implementation of a typical QMS, and in particular of key performance indicators, indicating the present state of performance in the organization. While analyzing design and implementation issues, requirements on the supporting EIS system will be derived. Finally, the chapter presents an outlook onto future developments, trends and research. This chapter reveals that key performance indicators can be well integrated in EIS systems, using either relational or object-oriented storage technology.

INTRODUCTION

Quality Management Systems

Over the last decades, enterprises and other organizations from large to small have come to implement quality management systems (QMS). Large Scale Enterprises (LSE's) and Small and Medium Enterprises (SME's) alike decided to apply a QMS to get a grip on the product and business process quality level customers nowadays expect. Many SME's initially did so "because customers ask for it". While customer satisfaction is a pivotal factor indeed, learning to master and apply quality principles correctly also assists in increased employee involvement and productivity, preventing defects from occurring, and reducing costs and production times. The key to achieving this is a timely and correct alignment of
the delivered quality in business processes at all levels in the organization, board to shop floor. The information needed to know and control quality performance goes hand in hand with other daily operational information within the organization and consequently, quality information will typically be residing in emerging Enterprise Information Systems (EIS). This is why in this chapter we will discuss quality management within the context of EIS systems, seen from the angle of SME’s. A QMS is not the same as an information system; an information system (e.g., an EIS system) supports the implementation of a QMS.
ISO 9001 does not prescribe any quality management system in particular but frames the process of designing, implementing and operating one, defining guiding principles, requirements and key elements it ought to contain for proper functioning: the what-to, not the how-to. Organizations can tailor and scale a QMS framework to their own needs and choose the implementation they see fit, as long as the standardized good quality management practices remain honored. The detailed design and operation of a QMS are critical to its success, however, and ultimately critical to the success of the organization as a whole.
ISO 9001

One of the principal and most widespread standards to design and implement a QMS is ISO 9001, belonging to the ISO 9000 family of standards. The most recent version of this standard is ISO 9001:2008. Figure 1, based on the 2006 ISO annual survey figures (ISO, 2006), shows the worldwide adoption of the standard.

Research Questions and Approach

Designing a fit-for-purpose QMS requires thorough understanding of business strategy and business processes, and the readiness to align the QMS with the business processes and vice versa. Generally, alignment and fine-tuning is something for the long haul, and of continuous concern. Niven (Niven, 2005) estimates that at present at most 10% of
Figure 1. Global uptake of ISO 9001:1994 (solid bars) and 9001:2000 (hatched bars) up to 2006; China is now the country with the largest number of ISO 9001-based QMS (approx. 200000), comparable to Europe as a whole. Source: (ISO, 2006).
the organizations actually achieve their strategic objectives. The problem is often (particularly for small SME's) that expertise is lacking and attention slips away after the QMS has been introduced; compare (Woodhouse, 2004). This is like buying an advanced piece of equipment without learning to operate it optimally: the cost is incurred while the deeper benefits never come into sight. Ultimately, quality is something to be embedded in and fused with daily operational processes in order to be effective and efficient; quality drives productivity.
Research Questions

In view of emerging EIS systems, the question is how to assure proper and lasting alignment, and how to obtain a coherent view on quality performance across the organization, such that it can be managed and kept in line with the quality targets defined. More specifically, the research questions arising are:

• How to define a quality performance strategy and measurable quality performance targets, and how to store this typically unstructured strategy description in an EIS system?
• How to measure the actual quality performance of each of the business processes?
• How to design and implement the quality performance indicators (often referred to as key performance indicators, or KPI's) for each of the business processes in the organization, including strategic control processes?
• How to determine a working definition, a proper format and a fit-for-purpose accuracy for each of the performance indicators; does the EIS system support storage of this format?
• How to compute and aggregate actual quality performance figures from data residing in the EIS system?
• How to store recorded quality performance in the EIS system?
• How to determine correct timing and predictive power for lead quality performance indicators, allowing off-target performance and defects to be remedied before repair becomes impossible?
• How to determine whether the hierarchy of performance indicators coherently and unambiguously supports strategic control?
• How to attain a flexible QMS implementation in which organizational change is adopted swiftly and which allows for appropriate up- and downscaling of activities?
• How to retrieve the correct actual quality performance data again from the EIS system?
• How to archive and maintain a history of quality performance data within the EIS system, allowing reconstruction at any one time in the past of the then-present quality performance?
• How to effectively and efficiently build up an evidence-based track record of quality performance within the EIS system, for auditing, approval and endorsement purposes?
• How to combine secondary EIS data with quality performance data in root cause analysis, so as to explore the deeper causes of ill-performance?
• How to combine performance data across organizational borders, in case of supply chains, delivery chains or partners delivering bundled products?
Quality in this context is not just product quality; it is also about controlling all processes so that the outcome is under control and about management aspects targeting customer satisfaction. A more accurate definition of quality will be given further down. A (key) performance indicator is a metric expressing the result of a measurement of
the performance of a business process. KPI's take the form of business data, typically processed and stored in a database on a computer system. Key performance indicators will be explained in detail in the sections ahead. The term key performance indicator (KPI) has become so common in practice that we also use KPI to refer to quality performance. A lead performance indicator expresses a future performance, for instance expected annual turnover based on current performance. Lead indicators and lag indicators (which record results in the past) will also be discussed in detail in the sections ahead. In the past two decades, great advancements have been made with respect to the above research questions; see for instance (Adams & Frost, 2008; Ahmad & Dhafr, 2002; Cobbold & Lawrie, 2002b; Hernandez-Matias, Vizan, Perez-Garcia, & Rios, 2008; Hubbard, 2007; Kaplan & Norton, 1996b; Kaplan & Norton, 2001a; Kaplan & Norton, 2001b; Kaplan & Norton, 2004a; Kaplan & Norton, 2006; Niven, 2005; Woodhouse, 2004). None of these contributions discusses the above research questions in the context of EIS systems, however. Consequently, it is unclear whether:

• EIS systems are suited to store structured and unstructured pieces of data and information related to a QMS;
• EIS systems adequately support storage of causal and other relationships;
• EIS systems allow lead indicators to quantitatively forecast future scores, trends and performance;
• EIS systems can provide evidence-based quality track records: can they keep track of approvals and of the results of plan-do-check-act cycles, for instance; can they reconstruct quality performance at any one moment back in time, for instance in case of a customer claim or complaint; can they show (evidence of) continuous improvement over a certain period?
• EIS systems support the access rights and roles needed to grant all employees proper access to the shared parts of the QMS and quality performance information, and to block improper modifications or manipulations of managed data;
• EIS systems are efficient enough to support recording of performance data and up- and downscaling efficiently;
• EIS systems can combine quality performance data of suppliers, wholesalers, retailers, etc., all having impact on customer-perceived quality and satisfaction;
• EIS systems support the kind of transactions needed to operate a QMS;
• EIS systems support reporting of quality performance at all levels in the organization;
• EIS systems require different skills when containing quality performance data.
This chapter addresses these issues. In order to understand the requirements on the EIS system we need to understand how quality performance is measured, the role of key performance indicators and bad performance alerts therein and finally the role of root cause analysis, in case management intervention is needed.
Research Approach

Approaches to design effective QMS systems covering all relevant aspects in an organization have been primarily developed in the nineties and the beginning of this century; see the early work in (Kaplan & Norton, 1992). Kaplan and Norton take the constantly transforming organization and the organization's strategic planning as the starting point, transforming organizational strategy into measurable strategic objectives to which a target can be assigned. A Strategy Map (Kaplan & Norton, 2004a; Kaplan & Norton, 2004b; Kaplan & Norton, 2004c) and a Destination Statement (Cobbold & Lawrie, 2002a) define the strategic change desired and a time frame for the transition, while a Balanced Score Card (BSC) is commonly used to combine different views and aspects of the transition in a single overview (Cobbold et al., 2002b; Kaplan et al., 1992; Kaplan & Norton, 1993; Kaplan & Norton, 1996a; Kaplan et al., 1996b; Kaplan et al., 2001a; Kaplan et al., 2001b). This makes the BSC one of the output forms to be delivered by the EIS system; more on this later. Strategic objectives to transform the organization and the products and services it delivers can only be successfully accomplished if their critical success factors (CSF's) are satisfied. Causal relationship analysis reveals how one critical success contributes to the next. Critical success factors can be monitored using KPI's. In fact, this cascade shows how strategic planning is made measurable and manageable down to the operational level; the bottom-up cascade of KPI's reports how well the organization's actual performance contributes to realizing the desired transformation at the strategic level. This approach leads to very flexible, effective quantitative quality management system implementations, and combining them with the efficiency of EIS systems appears very attractive. Flexibility is an important aspect in the context of the growing demand for mass customization and agile manufacturing; more on this later. For the research approach followed in this chapter, we will adopt the above outlined approach and analyze in every step the requirements and consequences for the implementation in an EIS system. This chapter thus seeks to present answers to the above defined research questions. Not all aspects are equally important: emphasis will be on the design of the KPI's and their embedding in an EIS.
Organization of this Chapter

The remainder of this chapter is organized as follows: firstly, a brief overview of ISO 9001 will be presented, among other things discussing its
guiding principles and main elements. After this background information, we further focus on the design and characteristics of KPI’s needed to learn the requirements for their embedding in an EIS system. The characteristics of properly designed KPI’s will be discussed, with special attention to the dynamics (the timing, say) of lead indicators. Concluding this chapter, an outlook onto the future of performance-based management will be given.
ISO 9001 OVERVIEW

Brief History of ISO 9001

Quality initiatives go as far back as the fifties, and Japan is generally seen as the cradle of industrial quality programming. Deming's early work on statistical process control (SPC) is generally seen as the birth of quality control, one of the main elements of quality management. The idea behind quality management efforts is that by controlling the variability in every process, a greater consistency in output results can be obtained. During the seventies and eighties, quality management and QMS systems gained worldwide attention. Starting out with US Army and NATO initiatives, most notably AQAP, the idea of quality management gradually invaded industry at large. Out of various national standardization efforts grew the desire to come to a standardized worldwide quality management framework. To that extent, ISO and more specifically its TC 176/SC2 committee started working on this global standard at the beginning of the eighties, resulting in the first version of ISO 9001 in the late eighties. After its initial 1987 version, ISO 9001 was updated in 1994 (ISO 9001:1994) and 2000 (ISO 9001:2000), and lastly in 2008, formally tagged ISO 9001:2008. Today, over a million organizations worldwide have adopted one of the above management frameworks, with the highest penetration in the realms of engineering and material technologies (International Organization for Standardization ISO, 2008). Some 100 countries participate in and/or follow the ISO 9001 standardization effort nowadays, with over 150 countries being ISO members and users spread across nearly 180 countries (Figure 1).
Other Quality Management Standards

With environmental objectives becoming ever more compelling, ISO 9001-based QMS systems are occasionally replaced or complemented by ISO 14001-based systems. The ISO 14000 family of standards deals with environmental management. Total Quality Management (TQM) can be regarded as a broader and deepened form of quality management. It lays strong emphasis on quality awareness everywhere in the organization and in its supply and delivery chains. It shares, among others, the quality performance management characteristics with ISO 9001.
Main Principles of ISO 9001

Satisfied customers stand central in ISO 9001, and all quality management is directed to just that. Keeping customers satisfied is not just a matter of products being free of deficiencies. The customer's explicit and implied needs and expectations should be satisfied by the product, and evidently, the product must comply with regulatory and generally implied sustainability demands. All this makes quality a subjective attribute that depends on customer perception. That is why an organization must make quality explicit beforehand and commit to living up to it, so that customers can rely on the design, manufacturing, environmental footprint, servicing, etc. to be in line with their expectations. All business processes should be mastered such that process output is guaranteed to be within specified quality bounds, contributing to a customer-satisfying product, now and in the future. No organization is static,
whether a private enterprise, public agency or non-governmental organization: an organization must constantly strive for a lasting and better satisfaction of its customers (shareholders, partners, the public, the environment …). The constant transition organizations find themselves in is also approached as a quality-controlled business process (the strategic control process) and is thus subject to quality management. Main principles contributing to the above philosophy are:

• A written QMS must be available and accessible to every employee and customer;
• Targeted actions and performance must be verified and validated periodically;
• The QMS must take a quantified approach: it must regularly record quality performance scores (or: KPI scores) and set them off against agreed target values;
• A process-based approach, with plan-do-check-act cycles (PDCA) to drive quality and to repair defects;
• Procedures to report and handle defects;
• Active promotion, commitment and involvement of senior management;
• Central roles and responsibilities shall be formalized and assigned;
• Regular auditing (internally, externally, or by the customer) shall be conducted;
• The QMS must be effective and efficient and support continuous improvement.
There is no room and no need to go into more detail here. In the sections following, the relevant details will be further discussed in relation to their embedding in an EIS system.
Main Elements of a Typical ISO 9001-Based QMS

A typical ISO 9001-based QMS consists of the following main elements:

• A handbook describing the QMS in detail, containing mandatory procedures such as a procedure for reporting and repairing defects, continuous improvement and auditing procedures, as well as other operational procedures;
• A Strategy Map, describing the organization's planned transition;
• A Balanced Score Card to consistently report on KPI scores;
• A collection of CSF's and the causal analysis of the hierarchy of CSF's;
• Infrastructure to regularly measure actual performance through KPI score reporting;
• A data collection of KPI scores and targets;
• Decision making to drive plan-do-check-act cycles and continuous improvement;
• An auditing regime to audit the adherence to the quality management as laid down in the handbook;
• A certificate of compliance with the own ISO 9001-based QMS, to assure customers, shareholders or anyone else of this compliance.

Notice that ISO 9001 does not prescribe all these elements in detail and leaves room for alternatives for some of them. This holds for instance for the Strategy Map and the Balanced Score Card. Most organizations will prefer to integrate these elements in their ISO 9001-based QMS, however, and their use has become common practice. Also note that ISO 9001 does not prescribe storing KPI scores in a computer system, let alone in an EIS. If preferred, KPI scores may be kept paper-based, but in practice virtually all organizations will opt to store them in a database on a computer.
Notice that ISO 9001 does not prescribe all these elements in detail and leaves room for alternatives for some of these elements. This holds for instance for a Strategy Map and for a Balanced Score Card. Most organizations will prefer to integrate these elements in their ISO 9001-based QMS, however, and their use has become common practice. Also note that ISO 9001 does not prescribe storing KPI scores in a computer system, let alone in an EIS. When preferred so, KPI scores may be paper-based, but in practice virtually all organizations will opt to store them in a database on a computer.
FrOM strAtEGy tO KPI’s Definitions In this chapter, we will understand quality in a context of producers and consumers of products and services. Quality is not limited to product quality: it extends through all business processes involved in the creation of the product, from component design in the supply chain to the services on a delivered product. Quality in this context is defined as the degree to which the inherent characteristics of a product or a service and the way it reaches the consumer fulfill consumer’s needs and expectations. A quality management system (QMS) is a system supporting a systematic approach to monitor, control and manage an organization’s quality performance. A QMS is driven by an underlying information system; an information system is a set of cooperating components with a structural organization, with the aim to capture, process, manage and distribute data and information. A critical success factor (CSF) is a core area or a limited number of areas in which satisfactory results will induce successful results in the environment around these area(s). A CSF commonly represents that core action, achievement or performance that if completed successfully, sets forth successful achievement of the rest. A (key) performance indicator (KPI) is a metric expressing the result of a measurement of the performance of a business process. Quality management following ISO 9001 takes a quantitative approach, which entails a need for a-priori known and quantified quality targets. KPI scores are set off against quality targets and should be within an a-priori defined bandwidth around the target value. A Balanced Score Card (BSC) is a management overview that groups KPI values logically, according to the perspective they cover, typically: customer-oriented processes, operational performance, financial performance, and what is commonly denoted as learning (experience, knowledge and growth). The
BSC (Kaplan et al., 1992) not only reports on KPI scores, but is designed, during QMS design, such that it also assists in the understanding of collective performances and of the mutual relationships among individual objectives: what drives what. Business Activity Monitoring (BAM) is the acquisition, processing and presentation of real-time information (e.g., KPI scores) on activity and performance within an organization.
A Generic Stepwise Approach

Following the research approach as outlined in the Introduction and the above definitions, we can now compile the following generic approach to designing and implementing a QMS, linking KPI's to strategy, and the realization of strategic transition to actual performance. Following the Kaplan and Norton approach top-down and bottom-up in an iterative process, we arrive at the steps in Table 1. In practice, the above stepwise approach is neither fully top-down nor bottom-up, but a middle-out process that iterates until an effective and efficient network of coherent indicators has been established. A scorecard design and the indicators populating it are in fact strongly intermingled, and simultaneously and iteratively developed in
the above marked process. Once the design has been validated, it can be operated.
STEP 1: Starting from Strategy

At board level, a balance has to be found between strategic planning and strategic control. The pressure on organizations to timely adjust their strategy is ever increasing, due to reasons such as globalization, sustainability, exploding energy costs, rapidly changing consumer preferences, mass customization and many others. The first step for an organization is to assess its present state of excellence, for instance by means of capability-maturity modeling (CMM), benchmarking, position auditing or any other Business Excellence Model approach. A Balanced Scorecard (BSC) captures central KPI's from various strategic perspectives: customer-oriented processes, operational performance, financial performance, and what is commonly denoted as the learning (experience, knowledge…) and growth perspective. The anatomy of the BSC is shown in Figure 3. The relative weights put on each of the perspectives may vary from organization to organization and are a matter of how organizations see themselves, their orientation, and the maturity level they position themselves in. A successful
Table 1. Stepwise translation of strategy and strategic change into KPI's, and verification that the KPI's as designed contribute to understanding quality performance up to board level.

STEP 1: At board level, strategic objectives are outlined in the Strategic Planning process and translated into measurable objectives, for instance by collecting them in a destination statement describing what should be realized within, say, three years from now. Strategic change (transition) is thus put under Strategic Control. Global CSF's are identified and a first version of a BSC is designed.

STEP 2: At the management levels below board level (Production Control), these objectives and their CSF's are further worked out into smaller (measurable) goals, linking up the operational processes. A cause-effect analysis is used to identify and verify causal relationships (Causal Analysis) between CSF's at subsequent levels.

STEP 3: Per CSF and for each operational process, one or more KPI's are determined, along with all their characteristics.

STEP 4: Next, a bottom-up verification and calibration process is conducted to learn how lower-level KPI scores aggregate at the next higher level and how their indication contributes to the performance measurement at that level. Aspects like interpretation, accuracy, missing values, lead times etc. are verified and validated.

STEP 5: Finally, arriving back at board level, the final verification is conducted, which should reveal whether the progress towards reaching the destination can indeed be monitored and controlled (Strategic Control), whether ill-performance is timely signaled, and whether risks and opportunities can be monitored and controlled adequately on the basis of the BSC and its KPI's.
Figure 2. Generic approach to design and implement a QMS, using an EIS system.
Figure 3. Anatomy of a BSC. What drives what (arrows) depends (among others) on the organization’s maturity level, orientation and ambitions.
transition to a next maturity level can be interpreted as a demonstration of continuous improvement. A first version of the BSC is typically designed in parallel to the design of the Strategy Map and Destination Statement. More detailed scorecards may be designed afterwards, and cascading scorecards may be used to aggregate results from various levels of the organization (Bukh & Malmi, 2005). Supplier performances can also be linked up (Angerhofer & Angelides, 2006; Bostorff & Rosenbaum, 2003; Brun, Caridi, Salama, & Ravelli, 2006). Which (KPI in a) perspective drives which other (KPI in a) perspective depends, among other things, on the organization's maturity level. Many of today's organizations find themselves in a stage in which operational performance drives financial performance. Learning organizations (knowledge-oriented, human capital-oriented organizations) go one step further: self-organization and learning
competence drive internal performance, which in turn drives financial performance. Learning organizations will typically work out and weight the Learning & Growth box correspondingly. Reviewing the above considerations, the following requirements emerge on EIS systems that are to support the implementation of QMS systems:

REQUIREMENT 1: Apart from structured information, an EIS system must also be capable of containing, handling and archiving unstructured information like the Strategy Map and Destination Statement.

REQUIREMENT 2: An EIS system must be capable of storing and retrieving CMM scores as input information for the next assessment of the CMM stage. Factors contained in the EIS system must be labeled as enablers and results. Maturity level assessments must be validated, and validation shall be supported by the EIS system.

REQUIREMENT 3: The impact of amendments to one of the strategic objectives, in terms of the CSF's and KPI's affected, can be assessed by the EIS.

REQUIREMENT 4: The other way around, each KPI and each CSF is linked to (one or more) strategic objectives; these relationships can be described for the purpose of causal analysis.

REQUIREMENT 5: Actors can be described on a per-data-item or a per-relationship basis, along with access rights and modification and archival roles.

REQUIREMENT 6: Each of the entities and relationships has an ownership defined.

REQUIREMENT 7: Each occurrence of KPI scores (and possibly also CSF qualification) has an explicit validation, by a hierarchy of validators (explicit roles); each validation shall be assigned a time stamp.

REQUIREMENT 8: An EIS system must be capable of reporting quality performance in a form resembling the designed BSC.
STEP 2: Towards CSF's

Research on critical success factors (CSF's) is somewhat interwoven with that on KPI's (Forster & Rockart, 1989; Rockart, 1986). Today, there is a substantial body of literature available on critical success factors, performance indicators and measurement as a whole. See for instance (Niven, 2005) and (Hubbard, 2007). For their statistical analysis background, a classic resource is (Dixon & Massey, 1983). Requirements on EIS systems:

REQUIREMENT 9: An EIS system shall be capable of containing, handling and archiving causal analysis results as unstructured information. Assumptions, conditions and limitations shall be stored along with the causal analysis itself.

REQUIREMENT 10: Historic (regular) verification and compliance analysis can be stored as structured information in the EIS system.

REQUIREMENT 11: CSF failure analysis can be stored as unstructured information in the EIS system.

REQUIREMENT 12: Each occurrence of CSF qualification has an explicit validation, by a hierarchy of validators (explicit roles of type Actor); each validation is assigned a time stamp and can be traced back.

REQUIREMENT 13: Once validated, CSF qualifications shall be protected against amendment; they can only be superseded by a revision.

REQUIREMENT 14: Release and version management shall apply to all CSF qualifications.

REQUIREMENT 15: Each CSF has one or more relationships with strategic objectives and with one or more KPI's; relationships of the type many-to-many shall be supported.

REQUIREMENT 16: The sensitivity analysis of CSF compliance to KPI scores can be recorded with the CSF description in the EIS system.
STEP 3: Towards KPI's

Some organizations tie CSF's and KPI's one to one; whilst it is good practice to specify CSF's for each strategic objective and to cover every CSF by one or more KPI's, additional KPI's may be required, e.g. to monitor performance in lower-level processes. An abundant KPI collection, on the other hand, should be avoided, just like over-accuracy and over-confidence. Accuracy means a low bias between real and measured data. Nowadays, organizations typically have some 10-25 CSF's and KPI's, and experience says that in any one decision making process no more than a few (4-8) KPI's should be involved (Woodhouse, 2000). Pioneering work on KPI's was done in the early sixties by Daniel (Daniel, 1961). The development of Lean Manufacturing (LM) and Total Productive Maintenance (TPM), mainly in the Japanese automotive industry during the seventies, led to the development of principal compound KPI's like Overall Equipment Effectiveness (OEE) and Total Effective Equipment Productivity (TEEP) (Ahmad et al, 2002; Hubbard, 2007; Mather, 2003; Woodhouse, 2000), later supplemented by On Time In Full from Supplier (OTIFS) and On Time In Full to Customer (OTIFC) (Ahmad et al, 2002), covering the supply and delivery chain respectively. The OEE is a KPI widely used in
industry, to express plant performance, equipment performance or system performance, but also service-providing performance, training performance, expert consultancy performance, or anything similar. OEE combines three major factors in a single KPI: availability, performance and quality; TEEP adds a fourth factor: the loading. OEE seeks to jointly measure planned productive time, performance in that time compared to the nominal performance, and good-quality production rates. Defined like this, world-class OEE's are approx. 85%, resulting from >99% quality, approx. 95% performance and 90% availability. A 100% OEE means no downtime (other than scheduled), no slow production and no defects. Calculation examples can readily be found in the literature and on the Internet. OEE computations are also supported in many commercial software systems, and can be integrated with many commercial ERP systems and Plant Management Systems.
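The arithmetic behind these figures is simple enough to show directly. The following minimal sketch (the function names are ours, not taken from any particular OEE tool) reproduces the world-class example quoted above:

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of the three factors."""
    return availability * performance * quality

def teep(loading: float, availability: float, performance: float,
         quality: float) -> float:
    """Total Effective Equipment Productivity: OEE times the loading factor."""
    return loading * oee(availability, performance, quality)

# World-class example from the text: 90% availability, 95% performance,
# 99% quality give an OEE of roughly 85%.
print(round(oee(0.90, 0.95, 0.99), 3))  # prints 0.846, i.e. approx. 85%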
The SMART Paradigm

In the design of KPI's, the SMART paradigm is frequently practiced, e.g. (Ahmad et al, 2002). A SMART KPI has the properties listed in Table 3.
Table 2. Left to right: building up compound types of KPI's out of simple types, and their evolution into benchmark KPI's like OEE. Left of the solid bar are the structured KPI types; right of the solid bar are the unstructured types. The latter can either be processed into structured types or stored as unstructured raw data.

Structured KPI types:
- Simple: elapsed time, lead time, cost/expenditure, unit count, employee count, customer count, …
- Compound: rates, specific cost, ABC cost, stocks, fees/wages, customer profiles, …
- Standard/best practice/benchmark: OEE, TEEP, OTIFS, OTIFC, …

Unstructured KPI types:
- Complex: data mining, web log pruning, questionnaire, interview, benchmarking, reviews, audit, …
- Assessment type: self-assessment, mutual assessment, certification
Table 3. SMART KPI characteristics.

Specific: The object of measurement must be unambiguously specified.

Measurable: What the KPI is designed to quantify must be measurable, and an adequate measurement method must be available.

Achievable: The measurement must be achievable in the time frame given, at reasonable cost, at the accuracy specified.

Relevant: The KPI must add to an assessment of the performance result and be relevant to the decision making process, in the way designed.

Time-based: The KPI must be measurable as long and as often as required, and at the frequency needed. Discrete measurements must be comparable against the measurements at other moments in time, to monitor development.
Specific KPI's

Occasionally, outright competing KPI's can be found. As an example, consider a research organization's ICT network. The yearly operational cost per seat of the network may be one KPI (target: < 8000 € per seat per year), while yearly productive research hours may be another KPI (target: > 1200 hours per researcher per year) and the quality of research (target: > 85% satisfactory to the customer) a third. Perfect corrective and preventive maintenance by the ICT department may easily bring that first target within reach. But at the same time, excessive network downtime may obstruct reaching the productive research hours (the second KPI's score) and jeopardize the quality of work. By combining the three KPI's into a single OEE (Table 2), optimization can be done by lowering the one or raising the other, but not at the expense of the others, and whilst optimizing the common (global) outcome. Neither availability of the network alone, nor performance in terms of productive hours alone, nor quality alone is specific enough for the overall objective; only the OEE, taking all three as input and multiplying them, is specific enough.
Measurable KPI's

A well-known quote in quantified quality management reads (Kaplan et al., 2004c): You can't manage what you can't measure ...
... and you can't measure what you can't describe. In the context of this discussion, a measurement is defined as follows (Hubbard, 2007): a measurement is a set of observations that reduces uncertainty, where the result is expressed as a quantity. Notice that a measurement is never an exact number: there is always some uncertainty associated with each measurement. A clear and concise description is needed of what exactly we want to measure: the object of measurement. Just the name of some phenomenon or variable (indicator) is not enough. The same goes for the measurement method. Apart from the quite measurable phenomena, organizations have a number of less-obviously-measured-but-vital-to-know things, like quality of management and employee motivation. In modern literature, these phenomena are commonly known as intangibles, e.g. (Hubbard, 2007).
Lag and Lead Indicators

KPI's can be subdivided into lag and lead indicators (Nudurupati, Arshad, & Turner, 2007; Woodhouse, 2000). A lag indicator reports on past performance; a lead indicator flags the advent of an event or state while it is emerging. Examples are:

• Tooling speed decrease may indicate tool wear, putting productivity in peril;
• A rising chisel temperature may indicate wear and imminent breakdown;
• Decreasing income tax agency website visits may indicate a better understood income tax form and consequently less processing and reviewing capacity needed at the agency next year;
• Employee dissatisfaction may be a lead-in for employee absence due to illness.
Lead indicators require careful (lead) timing, depending on the underlying process dynamics and the intervention model. With process dynamics such that the targeted value (controlled object) can grow out of control in, say, 4 days, a monthly KPI is useless as a predictor; an hourly or perhaps a daily KPI would be sufficient, allowing for timely intervention. Critical alert levels and the lead time to intervention should also be taken into account. The lead time to intervention (total response time included) should be sufficient to allow the intervention to unroll as designed. Timing is generally not a (big) issue for lagging KPI's, which commonly average or sum up and log past performances over some time interval. Figure 4 shows an example of the timing of a lead variable. The tooling speed indicates tool wear; the speed must stay above a critical bottom level to maintain proper productivity. According to the measured KPI values (black solid dots), this level is predicted to be reached after 3-4 days; the predicted failure point is indicated in the diagram. The alert level is predicted to be reached slightly after 3.4 days; an alert with the KPI score at day three allows for an intervention lead time of 0.8 days. If that is not sufficient, the alert level can be raised and the alert will be issued earlier. This example is highly simplified, to illustrate the various aspects discussed here. In real practice, correct timing of lead variables is much more complicated. Furthermore, it is good practice to verify and validate adequate functioning of the KPI, including timing and alert levels (the dynamics of the KPI). Scenario play, Monte Carlo simulation and determining confidence intervals can help to estimate reliable alert and intervention levels, so that the actual maintenance moment is right to stay on-target. The whole process of alerting, responding and scheduling of the intervention (in total: lead time to intervention) must fit in the time span between alert and failure point, together with the intervention time itself. For further details, refer to (Brun et al., 2006; Dixon et al., 1983; Edgar, 2004; Hernandez-Matias et al., 2008; Hubbard, 2007).
Figure 4. Timing example for a tooling speed KPI. After 3 to 4 days, the tooling speed sinks below a critical bottom level and the tool needs replacement and/or readjustment. Measuring the KPI value daily predicts that failure point (here, after approx. 3.8 days) with sufficient accuracy to allow a timely alert for intervention.
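The prediction in Figure 4 amounts to extrapolating a fitted trend to its threshold crossings. A minimal sketch, assuming a linear wear trend; the daily measurements and threshold values are invented to match the shape of the figure, not taken from it:

```python
import numpy as np

# Daily tooling-speed measurements (illustrative units).
days  = np.array([0.0, 1.0, 2.0, 3.0])
speed = np.array([10.0, 9.2, 8.4, 7.6])
alert_level   = 7.3   # assumed alert threshold
failure_level = 7.0   # assumed critical bottom level

# Fit a linear wear trend and extrapolate it to the threshold crossings.
slope, intercept = np.polyfit(days, speed, 1)

def crossing(level: float) -> float:
    """Day on which the fitted trend is predicted to sink to `level`."""
    return (level - intercept) / slope

t_alert, t_fail = crossing(alert_level), crossing(failure_level)
print(f"alert level reached ~day {t_alert:.2f}, failure ~day {t_fail:.2f}")
# An alert raised with the day-3 KPI score leaves roughly t_fail - 3 days of
# intervention lead time; raise alert_level if that is not sufficient.
print(f"lead time from a day-3 alert: {t_fail - 3.0:.2f} days")
```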
Achievable KPI's

Occasionally, the ideal KPI cannot be measured in the process, while a close-to-ideal KPI is readily available: a matter of what we want to know versus what we can tell you (Figure 5). The trade-off is the cost (risk) of not knowing (exactly) what needs to be known, versus using what is readily available. A KPI does not need to be perfect and can never reveal everything about the true state of a process, much like a dashboard does not tell every detail about a car, yet tells enough to drive it safely.
KPI's must be separated from the analysis information used to drill down into the causes of malfunctioning and off-target performance. Taking a traffic light as a metaphor: you do not need to know exactly why the traffic light turned red in order to drive home safely. It may be interesting to know, but basically all you need to know for a safe trip is: green means drive on, red means stop. The same holds for KPI's and the accuracy and uncertainty associated with a KPI. Accuracy is a characteristic of a measurement having low systematic error, whereas precision refers to low random error
Figure 5. Finding KPI’s means balancing what we need to know and the penalty of not knowing versus what we can tell you and the value of that information.
Figure 6. KPI scores versus actual state; erroneously missing bad performance (type-II errors) should be zeroed out. False red alerts (type-I errors) are costly and slow down performance, but are not immediately catastrophic. The balance is a cost/risk trade-off.
(Hubbard, 2007). Simplicity of computation and interpretation is to be traded off against the chances of a false alert (a false red, taking the traffic light metaphor again) or an unjustified performance OK (an unjustified green); see Figure 6. What false rates to tolerate is a matter of balancing the costs and the risks. This brings us to the issue of the value of knowing and the cost of not knowing: the more we know, the smaller the uncertainty and the better the decision making, but also the higher the cost. Where is the trade-off? What are the chance and the cost of being wrong? Generally, lost opportunities are capitalized, and information that can reduce the chances of missing an opportunity is assigned a value (Hubbard, 2007; Woodhouse, 2004). The cost of being wrong is the difference between the wrongly chosen alternative (based upon the current information) and the best alternative (had one had perfect information). Opportunity Loss (OL) is the cost incurred for an alternative that turns out to be wrong; Expected Opportunity Loss (EOL) is thus the chance of choosing a wrong alternative times its cost. Reducing the uncertainty about the best alternative reduces the chances of making the wrong choice and hence the EOL. The difference in EOL before and after additional measurement is the Expected Value of Information (EVI) resulting from those measurements. Extra
measurements pay off as long as they cost less than the EOL reduction gained (Figure 7). This theoretical framework may help to determine the "right" quality of measurements and information to support decision making. Computing EVI values beforehand is complicated, however. Hubbard suggests computing the EVI for excluding uncertainty altogether, that is, computing the Expected Value of Perfect Information (EVPI). Since perfect information excludes the entire uncertainty (EOL_after = 0), by definition the EVPI is the EOL of the chosen alternative without additional information. Examples can be found in (Hubbard, 2007).
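A minimal sketch of these definitions (the loss figure and probabilities are invented for illustration):

```python
def eol(p_wrong: float, opportunity_loss: float) -> float:
    """Expected Opportunity Loss: chance of a wrong choice times its cost."""
    return p_wrong * opportunity_loss

# Before extra measurements: 35% chance of a wrong choice costing 200 k€.
eol_before = eol(0.35, 200_000)
# After extra measurements the uncertainty shrinks to, say, 10%.
eol_after = eol(0.10, 200_000)

# Expected Value of Information = EOL reduction achieved by measuring.
evi = eol_before - eol_after     # 50 k€ here
# Perfect information removes all uncertainty (EOL_after = 0),
# so the EVPI equals the EOL of the current choice.
evpi = eol_before
print(f"EVI = {evi:,.0f} €, EVPI = {evpi:,.0f} €")
# Extra measurements pay off as long as their cost stays below the EVI.
```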
Standardized and Benchmark KPI's

For many applications, lists of useful KPI's have been compiled and disclosed through the Internet. Branches and professional communities in chemistry, construction, health care, etc. are starting to collect and standardize KPI's, to support self-assessment and benchmarking. For further details and pointers to resources, see (Califf, Gibbons, Brindis, & Smith, 2002; Campbell, Roland, & Buetow, 2000; Dolan, 2001; Drummond, O'Brien, Stoddart, & Torrance, 1997; Gouscos, Kalikakis, Legal, & Papadopoulou, 2007; Holden, 2006; Nudurupati et al., 2007; Puigjaner & Guillen-Gosalbez, 2008; Schultink, 2000; Van den Eynde, Veno, & Hart, 2003).
Figure 7. The reduction of EOL (solid curve) versus the additional measurement cost (dashed curve) needed to achieve it. As long as the loss reduction ∆loss exceeds the cost of additional measurements ∆cost, extra measurements pay off. Modified after (Hubbard, 2007).
STEP 4: BOTTOM UP VERIFICATION

A number of important issues shall be verified once the KPI's have been designed:

• Adequacy of the accuracy of each of the KPI's, measured on the measured object with the measurement method identified (Hubbard, 2007);
• Confidence intervals on KPI scores;
• The identification of outliers on the measured KPI's;
• The impact of a missing value or a late arrival: what if, for some reason, a KPI cannot be measured (timely)?
• The adequacy of the lower bound and/or the upper bound around the target value (boundary values may be one-sided, yielding a half-open interval), and the equivalence classes of values so obtained;
• The association of transaction processing procedures with each of the equivalence classes and boundary values; underperformance may require transaction processing in the EIS system that differs from the transaction processing in case performance is within limits;
• Reliability and effectiveness of the verification and validation procedure;
• The sensitivity of KPI's and CSF's;
• What to do with results forecast by lead indicators: should a production forecast be corrected as soon as a lead performance indicator predicts the next charge of products to have bad quality?
Important requirements on EIS systems:

REQUIREMENT 17: KPI's are compound values, with a target value, at least one lower or upper bound and possibly a second boundary around the target value; EIS systems shall be capable of storing this compound valued variable. Equivalence classes must be stored (or computed) by the EIS system, so as to classify the incoming KPI score. Transactions supported by the EIS must be associated with each of the equivalence classes (including the boundary values);
REQUIREMENT 18: For each KPI, the EIS can contain, handle and archive a measured object description, a measurement method description and a measurement equipment description;
REQUIREMENT 19: The accuracy and confidence interval description (unstructured) of each of the KPI's can be stored in the EIS system;
REQUIREMENT 20: An EIS system must be capable of containing, processing, reproducing and archiving historical time series of KPI scores, along with their validations;
REQUIREMENT 21: An EIS system shall be capable of containing, handling and archiving pointers to root cause analysis objects, to support failure analysis;
REQUIREMENT 22: An EIS system shall allow for readjustment of the accuracy of KPI's;
REQUIREMENT 23: An EIS system shall be capable of supporting variable domain-based and state-based transaction processing;
REQUIREMENT 24: An EIS system shall support triggering and transaction processing cascading on predefined variable hierarchies, to allow for compound recomputation of a series of KPI scores;
REQUIREMENT 25: An EIS system shall support exclusive modification (locking mechanisms) and roll-back mechanisms in case of errors;
REQUIREMENT 26: An EIS system shall support the validation of predefined variable hierarchies, to allow for validations of compound KPI values;
REQUIREMENT 27: Lead indicators require timely refreshment of the data they need for their calculations; version matching is generally not enough. EIS systems should be capable of working with the refreshment rates required by lead indicators. All data must be time-stamped;
REQUIREMENT 28: For each lead indicator, an EIS can contain, handle and archive an intervention model. An intervention model in this context is a collection of planned preventive or corrective actions needed to restore proper quality performance. An intervention model is a compound structured object capturing information on the actors to intervene, when the intervention should take place (lead time), the actions to undertake, the (elapsed) time required and the way in which the intervention is to be approved. Apart from a scheduled intervention, the EIS shall support the recording of a realized intervention, registering what action has been conducted. Alternatively, an EIS may store a service level agreement (SLA) as a model of intervention;
REQUIREMENT 29: An EIS system shall be capable of conducting transaction processing, alerting and reporting synchronously with limited latency; (distributed) system latency and human response time need to be taken into account when verifying and validating lead indicator dynamics, just like time zone differences.
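To make Requirements 17 and 23 concrete, a sketch of a compound KPI value whose equivalence classes dispatch different transaction processing; the class names, bounds and associated transactions are invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KpiDefinition:
    """Compound KPI value (Requirement 17): target plus one or two bounds."""
    name: str
    target: float
    lower: float                   # at least one bound is present
    upper: Optional[float] = None  # None yields a half-open interval

    def equivalence_class(self, score: float) -> str:
        """Classify an incoming KPI score into an equivalence class."""
        if score < self.lower:
            return "underperformance"
        if self.upper is not None and score > self.upper:
            return "overperformance"
        return "within limits"

# Transactions associated with each equivalence class (Requirements 17, 23).
TRANSACTIONS = {
    "underperformance": "store score, open root-cause pointer, alert process owner",
    "within limits":    "store score, continue regular processing",
    "overperformance":  "store score, propose sharpened target",
}

kpi = KpiDefinition("availability", target=0.90, lower=0.85)
cls = kpi.equivalence_class(0.82)
print(cls, "->", TRANSACTIONS[cls])   # underperformance -> ...
```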
STEP 5: VERIFICATION STRATEGIC CONTROL

Finally, verification and validation are carried out as to whether on-target KPI scores adequately indicate progress towards the accomplishment of strategic change, and conversely whether off-target KPI scores relate to strategic control under-performance. It is also important to verify whether all relevant aspects of strategic change (transition) are covered by the KPI's. Commonly, the reporting mechanism up to board level is the BSC, on which the KPI's are collected in a coherent scheme (Figure 3). Deeper analysis can help to verify whether the indications from the designed BSC and their mutual relationships provide a valid image of the current state of organizational performance they reflect. Sensitivity analysis, confidence intervals and scenario play can all be helpful instruments to investigate that. Important requirements on EIS systems:

REQUIREMENT 30: An EIS system shall be capable of presenting KPI values organized as in the designed BSC;
REQUIREMENT 31: An EIS system shall allow for a documented validation up to strategic control level;
REQUIREMENT 32: An EIS can present pointers to root cause analysis objects associated with each KPI, to allow verification of the indication reflected by the KPI.
QMS AND EIS SYSTEMS

The above-listed requirements partly overlap. Furthermore, they have to be evaluated against EIS technological characteristics as we understand them today. This will be done next.
Data Types Required

Evaluating the above requirements, the following data types shall be supported by the EIS. Documents (unstructured data types) may be contained:

• in original format, by a Document type;
• in restricted format (e.g., PDF), by a Document type or in a BLOB;
• scanned, in image format, by a Document type or in a BLOB;
• or in the form of a hyperlink to an external document.
Typical EIS systems are expected to support these types (except the Document type), both in systems using relational data storage technology (RDBMS) and in systems using object-oriented storage technology (OODBMS).

Table 4. QMS-required data types.
• Structured data types: regular R-DBMS types (int, String, …) or standard object types in an OODB; Date Time type; Actor type; Transaction; Release/Version; Flag data type (validated, archived, …)
• Unstructured data types: Document type; BLOB data type; Hyperlink data type
KPI Score Lifecycle

A KPI score lifecycle looks as follows; also refer to (Nudurupati et al., 2007):

• data creation
◦ data source management
◦ data measurement equipment management
◦ measured object collection
◦ data measurement and acquisition
◦ data collection and structuring/packaging
◦ data registration
◦ data transmission or reporting
• data processing
◦ data intake
◦ data structure validation
◦ data completeness and integrity check
◦ data access rights validation
◦ (singular) data quality control
◦ data analysis and validation
◦ data equivalence estimation and transaction processing assignment
• data storage
◦ data ownership assignment
◦ data access rights assignment
◦ data release and version assignment
◦ data relationships (description etc.) assignment
• data statistical quality control
• data and relationship interpretation
• data distribution
• data release and version management
• data quality control and auditing
• data archiving
• data destruction
There are no stages in the above lifecycle that are not supported in some form by a typical EIS
system. The first stage (data creation) may take place outside the scope of an EIS system, but data acquisition may be supported. Statistical analysis is another function that may, but need not, be supported by an EIS system. Measured KPI scores are to be evaluated against target values. The ISO principle of continuous improvement entails a regular sharpening of the target value: an organization should evaluate whether the target can be raised to a higher performance level. Target values are coupled one-to-one to the KPI and also have a history and versions, but their lifecycle and refreshment cycle are much longer than those of the KPI scores, typically yearly. Also, the access rights and credentials needed to modify target values are completely different from those of the KPI scores themselves.
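A sketch of that separation: scores form a fast-moving, time-stamped series, while targets are versioned separately under their own custodian. The data model is an illustrative assumption, not one mandated by the chapter:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetValue:
    """Versioned target: refreshed ~yearly, modifiable only by its custodian."""
    value: float
    version: int
    valid_from: str   # ISO date
    custodian: str    # Actor holding the modification rights

@dataclass
class KpiScore:
    """Time-stamped measurement; the validated flag is set during processing."""
    timestamp: str
    value: float
    validated: bool = False

@dataclass
class Kpi:
    name: str
    targets: List[TargetValue] = field(default_factory=list)
    scores: List[KpiScore] = field(default_factory=list)

    def current_target(self) -> TargetValue:
        return max(self.targets, key=lambda t: t.version)

kpi = Kpi("on-time delivery")
kpi.targets.append(TargetValue(0.95, 1, "2010-01-01", "quality custodian"))
kpi.targets.append(TargetValue(0.97, 2, "2011-01-01", "quality custodian"))  # sharpened
kpi.scores.append(KpiScore("2011-02-01T12:00", 0.96, validated=True))
print(kpi.current_target().value)   # 0.97
```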
Relationships

With respect to relationships: optional and mandatory relationships of the types 0-to-1, 1-to-1, 1-to-n and n-to-m should be supported by the EIS. This includes not only relationships among structured types, but also between structured and unstructured types and among unstructured types. EIS systems may be expected to comply with this requirement.
Ownership

All data and all relationships can be assigned an ownership (type Actor) and access rights. Higher stages of Business Excellence Models, Capability Maturity Models, etc. require that KPI's and all constituent measurements are documented and managed, for instance by a Custodian or a Data Manager. Change proposals for measurements and change proposals for EIS systems must be merged and become, in fact, one and the same.
Transaction Processing

Apart from the regular transaction processing, the following specific processing features shall be offered by the EIS system:

• Locked processing, single item or cascading, with a roll-back mechanism (see the sketch after this list);
• Constrained insertion and deletion, to enforce mandatory data items and relationships;
• Archiving of data items with all related data;
• Modification of Actors (for instance Ownership) on a group (release, version, …) of data;
• Online validation and online auditing, in a workflow manner;
• Management (containing, handling and archiving) of unstructured data items;
• Verification of the validity of an external reference (hyperlinked object);
• Management of externally referenced data objects (documents).

Preferably:

• Apart from sorting and searching structured types (according to their value), searching the content of unstructured types.
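A sketch of the first feature, locked processing with roll-back; this is a simplified in-memory stand-in for what a production EIS would delegate to its DBMS:

```python
import threading
from contextlib import contextmanager

class KpiStore:
    """Toy store illustrating exclusive (locked) modification with roll-back."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    @contextmanager
    def transaction(self):
        with self._lock:                    # exclusive modification (locking)
            snapshot = dict(self._data)     # roll-back point
            try:
                yield self._data
            except Exception:
                self._data = snapshot       # roll back on any error
                raise

store = KpiStore()
try:
    with store.transaction() as data:
        data["availability"] = 0.93
        data["oee"] = data["availability"] * 0.95 * 0.99  # cascading update
        raise RuntimeError("validation failed")            # simulated error
except RuntimeError:
    pass
print(store._data)   # {} -- the partial, cascading update was rolled back
```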
Data Volume

The EIS system must be capable of storing the KPI scores over a longer period of time. Archived data may reside online or be moved to some background storage (tape, vault, WORM, …). During the design of the QMS, an estimate can be made with respect to the expected data volume.
Data Distribution

Data distribution is typically through a Web interface and is not further discussed here.
EIS Subsystems

Different KPI's may originate from different EIS subsystems. Preferably, each KPI is associated with, and also stored in or aggregated from, a single EIS subsystem, to maintain optimal system characteristics with respect to:

• Flexibility;
• Modularity;
• Extendibility.
See Figure 8. Flexibility, modularity and extendibility are important aspects of system integration and system architecture. In the embedding of KPI's, no unnecessary subsystem dependencies are to be introduced. A proper mapping of strategic objectives, CSF's, processes and KPI's onto subsystems may help to achieve this: customer-relationship KPI's in the CRM subsystem, resource-related KPI's in the ERP subsystem, etc. For lead KPI's to function correctly, retrieval, computation and, in fact, the availability of the EIS (subsystems) must satisfy the timing requirements as specified. For further details, refer to (Wier, Hunton, & HassabElnaby, 2007).
FUTURE TRENDS

Issues, Controversies, Problems and Potential Solutions

Organizations and their role in society have changed and will continue to change. A historical overview of enterprises and their industrial, economic and societal position can be taken from (Mokyr, 2005). Edgar (2004) identifies four epochs, with the quality movement, the previous epoch, merging into the present 21st-century enterprise-view epoch. Traditionally, (for-profit) organizations sought to raise business revenues and shareholder value through high outputs while minimizing production costs. Today, product quality and operational excellence are generally no longer sufficient to survive. Firstly, both LSE's and SME's are facing globalization, forcing them
Figure 8. Flexibility, modularity and extendibility are important aspects of system integration and system architecture. In the embedding of KPI’s no unnecessary subsystem dependencies shall be introduced. A proper Objectives-CSF’s-Process and KPI’s mapping onto subsystems may help to achieve this.
to reconsider the organization of their activities: outsourcing, supply chain networks, strategic partner alliances, bundled products, alternate delivery channels like online sales and the like, in a more or less global context. In the US in 2005, less than 5% of retail activities took place on the Internet; by 2010 this figure was expected to go up to some 13% (Johnson, 2005). There is no room to go further into these developments here; see for instance (Gerritsen, 2008), and also (Hoogeweegen, van Liere, Vervest, Hagdorn van der Meijden, & de Lepper, 2006) for further details. Secondly, customers, having a global overview of available options, prices and conditions, nowadays find themselves in a strongly developed consumer market, and act accordingly. Moreover, customers have come to demand personalization, options, connectivity and additional bundled features through appropriate packaging, commonly grouped under the term mass customization. A third factor is formed by increasingly compelling regulations and compliance demands. A fourth important factor is the increase in the number of financial and fiscal models organizations (including SME's) can choose from. Organizations develop or reconsider their strategic asset portfolios and change their views on investments accordingly. Finally,
in the twenty-first century, organizations cannot operate in splendid isolation: they are (forced to be) aware of and confronted with public demands on sustainability, setting forth ethical, societal and environmental expectations to comply with. Organizations are foreseen to transform even more rapidly in the future, to keep up with changing economic and societal demands (Amaravadi Ch, 2003; Ein-Dor, 2003; Fuller, 2003a; Fuller, 2003b; Hassan & McCaffer, 2002; Hoogeweegen et al., 2006; Stevenson, 2000; Tsoukas & Shepherd, 2004; Warhurst, 2005). Contrary to classical economic theory, China managed to show tremendous growth in SME start-ups without having a capitalist regime (Fuller, 2003b). Organizations will further virtualize (participate in complex, geographically dispersed, dynamic and constantly changing alliances), become more open and transparent to gain trust and confidence, and become much more agile when it comes to opting in on new opportunities. Organizations will understand and exploit the fact that they are visible 24/7: branding, imaging, reputation and all other shine factors (Woodhouse, 2004) become dominant, although it is up to the customer to evaluate and acknowledge these qualities. Company culture will radically change: the distance between managers
and workers will diminish and shared company knowledge will become a central asset. Virtualization will also bring about the need to share formerly strictly internal data and information with the supply and delivery chain and with partners (Puigjaner et al., 2008). Intellectual property concerns will become more important than they are today. Sharing knowledge (the learning organization) empowers virtual companies and partnerships, but new management models are needed to render this process successful. QMS's tend to integrate with other performance management frameworks, like asset management (see Figure 9). The recent PAS 55, ISO 9001, ISO 14001 and other developments are expected to converge into a generic performance-oriented management framework. Lifecycle management will become a leading principle. Product, material and knowledge loops will be closed, which means, for instance, that car manufacturers take back the cars they produced at the end of their lifecycle. Manufacturers will more and more act as owners, offering the product as an asset to customers.
Branches, collectives, interest groups, etc. will seek to standardize KPI's, and large-scale programmes and databases will emerge on the Internet (www.kpilibrary.com) to support self- and mutual excellence assessment, ultimately levering the global quality of life (e.g. HEDIS, UN Habitat; Holden, 2006). Emerging examples are the Capability Maturity Model (CMM), the EFQM (European Foundation for Quality Management) assessment techniques and the South African Excellence Model, all developed along the lines of TQM (Ahmad et al., 2002; Cobbold et al., 2002a; Woodhouse, 2000). The construction industry developed the Construction Best Practice Programme (CBPP), which serves similar purposes (Nudurupati et al., 2007).
CONCLUSION

For large as well as medium and small enterprises and other organizations, designing, implementing and using a QMS can naturally and swiftly be integrated into EIS systems using either relational
Figure 9. The embedding of quality management in an asset value-centered asset management framework, after (Woodhouse, 2004). Further development may converge into a generic performance management framework (© 2004, The Woodhouse Partnership Ltd. Used with permission)
or object-oriented storage technology. A number of requirements must be met, however, as discussed in this chapter. Both structured and unstructured data and information shall be accommodated, as well as historical data; verification, validation and approval information; version and release information; and extensive roles and access rights models. The design of adequate KPI's is a key step, and interpreting them correctly during operational use requires thorough knowledge of the cause-effect relationships, accuracy and sensitivity of each KPI. Reporting of the recorded KPI scores commonly takes place using a balanced scorecard, which groups KPI scores logically, perspective by perspective, so as to express maximum insight and assist maximally in adequate decision making. At various organizational levels up to board level, this evaluation instrument serves to monitor the delivery of performance and quality according to the targets agreed upon. This approach facilitates factual decision making and management control processes. Large, medium and small enterprises and other organizations may implement systems like this to increase customer satisfaction with zero defects whilst saving resources. Databases of more or less best-practice KPI's are starting to appear on the Internet, realm by realm. In the future, more generic forms of performance management are going to be seen, emerging from the merger of present forms like quality management and asset management. Although ISO 9001 was considered in the above discussion, all that has been said also applies to other quality approaches like Total Quality Management (TQM), to ISO 9001 extensions like ISO/TS 16949:2002 for the automotive industry, and largely also to ISO 14001:2004.
REFERENCES

Adams, C. A., & Frost, G. R. (2008). Integrating sustainability reporting into management practices. Accounting Forum.
Ahmad, M. M., & Dhafr, N. (2002). Establishing and improving manufacturing performance measures. Robotics and Computer-Integrated Manufacturing, 18, 171–176. doi:10.1016/S0736-5845(02)00007-8
Amaravadi Ch, S. (2003). The world and business computing in 2051. The Journal of Strategic Information Systems, 12, 373–386. doi:10.1016/j.jsis.2001.11.012
Angerhofer, B. J., & Angelides, M. C. (2006). A model and a performance measurement system for collaborative supply chains. Decision Support Systems, 42, 283–301. doi:10.1016/j.dss.2004.12.005
Bolstorff, P., & Rosenbaum, R. (2003). Supply chain excellence: A handbook for dramatic improvement using the SCOR model. New York: AMACOM; American Management Association.
Brun, A., Caridi, M., Salama, K. F., & Ravelli, I. (2006). Value and risk assessment of supply chain management improvement projects. International Journal of Production Economics, 99, 186–201. doi:10.1016/j.ijpe.2004.12.016
Bukh, P. N., & Malmi, T. (2005). Re-examining the cause-and-effect principle of the balanced scorecard. In G. Jonsson & J. Mouritsen (Eds.), Accounting in Scandinavia - Northern Lights (pp. 87-113). Malmö: Liber & Copenhagen Business School Press.
Califf, R. M., Gibbons, R. J., Brindis, R. G., & Smith, S. C. (2002). Integrating Quality into the Cycle of Therapeutic Development. Journal of the American College of Cardiology, 40(11), 1895–1901. doi:10.1016/S0735-1097(02)02537-8
Campbell, S. M., Roland, M. O., & Buetow, S. A. (2000). Defining Quality of Care. Social Science & Medicine, 51, 1611–1625. doi:10.1016/S0277-9536(00)00057-5
Cobbold, I., & Lawrie, G. (2002a). Classification of balanced scorecards based on their intended use. PMA Conference. Berkshire, UK: 2GC Ltd.
Fuller, T. (2003b). Small business futures in society (Introduction). Futures, 35, 297–304. doi:10.1016/S0016-3287(02)00082-4
Cobbold, I., & Lawrie, G. (2002b). The development of the balanced scorecard as a strategic management tool. PMA Conference. Berkshire, UK: 2GC Ltd.
Gerritsen, B. H. M. (2008). Advances in Mass customization and adaptive manufacturing. In I. Horvath & Z. Rusak (Eds.), TMCE 2008 (pp. 869-880). Delft, Netherlands: Delft University.
Daniel, R. D. (1961). Management information crisis. Harvard Business Review, 39(Sept-Oct).
Gouscos, D., Kalikakis, M., Legal, M., & Papadopoulou, S. (2007). A general model of performance and quality for one-stop e-Government service offerings. Government Information Quarterly, 24, 860–885. doi:10.1016/j.giq.2006.07.016
Dixon, W. J., & Massey, F. J. (1983). Introduction to statistical analysis (3rd ed.). New York: McGraw-Hill Book Company.
Dolan, P. (2001). Output measures and valuation in health. In M. F. Drummond & A. McGuire (Eds.), Economic evaluation in health care (pp. 46-67). Oxford: Oxford University Press.
Drummond, M. F., O'Brien, B., Stoddart, G. L., & Torrance, G. W. (1997). Methods for the economic evaluation of health care programmes (2nd ed.). Oxford: Oxford University Press.
Edgar, Th. F. (2004). Control and operations: When does controllability equal profitability? Computers & Chemical Engineering, 29, 41–49. doi:10.1016/j.compchemeng.2004.07.013
Ein-Dor, Ph. (2003). The world and business computing in 2051: From LEO to RUR? The Journal of Strategic Information Systems, 12, 357–371. doi:10.1016/j.jsis.2001.11.011
Forster, N. S., & Rockart, J. F. (1989). Critical success factors: An annotated bibliography (Rep. No. CISR WP No. 191, Sloan WP No. 3041-89). Cambridge, MA: Sloan School of Management, MIT.
Fuller, T. (2003a). If you wanted to know the future of small business what questions would you ask? Futures, 35, 305–321. doi:10.1016/S0016-3287(02)00083-6
Hassan, T. M., & McCaffer, R. (2002). Vision of the large scale engineering construction industry in Europe. Automation in Construction, 11, 421–437. doi:10.1016/S0926-5805(01)00074-7
Hernandez-Matias, J. C., Vizan, A., Perez-Garcia, J., & Rios, J. (2008). An integrated modelling framework to support manufacturing system diagnosis for continuous improvement. Robotics and Computer-Integrated Manufacturing, 24, 187–199. doi:10.1016/j.rcim.2006.10.003
Holden, M. (2006). Urban Indicators and the Integrative Ideals of Cities. Cities (London, England), 23(3), 170–183. doi:10.1016/j.cities.2006.03.001
Hoogeweegen, M., van Liere, D. W., Vervest, P. H. M., Hagdorn van der Meijden, L., & de Lepper, I. (2006). Strategizing for mass customization by playing the business networking game. Decision Support Systems, 42, 1402–1412. doi:10.1016/j.dss.2005.11.007
Huang, H.-C. (in press). Designing a knowledge-based system for strategic planning: A balanced scorecard perspective. Expert Systems with Applications.
Hubbard, D. W. (2007). How to measure anything: Finding the value of intangibles in business. John Wiley & Sons, Inc.
Hwang, W. T., Tien, W. T., & Shu, C. M. (2007). Building an executive information system for maintenance efficiency in petrochemical plants - an evaluation. Trans IChemE, Part B: Process Safety and Environmental Protection, 85, 139–146. doi:10.1205/psep06019
International Organization for Standardization (ISO). (2008). ISO in figures for the year 2007. Geneva: ISO Central Secretariat.
ISO. (2006). The ISO Survey 2006. Geneva: International Organization for Standardization. Retrieved from http://www.iso.org
Johnson, C. (2005). US e-commerce: 2005 to 2010, a five year forecast and analysis of US online retail sales. Forrester Research.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard - measures that drive performance. Harvard Business Review, 70(1), 71–79.
Kaplan, R. S., & Norton, D. P. (1993). Putting the balanced scorecard to work. Harvard Business Review, 71(5), 134–140.
Kaplan, R. S., & Norton, D. P. (1996a). The balanced scorecard. Boston, MA: Harvard Business School Press.
Kaplan, R. S., & Norton, D. P. (1996b). Using the balanced scorecard as a strategic management system. Harvard Business Review, 74(1), 75–85.
Kaplan, R. S., & Norton, D. P. (2001a). The strategy-focused organization. Strategy and Leadership, 29(3), 41–43.
Kaplan, R. S., & Norton, D. P. (2001a). Transforming the balanced scorecard from performance measurement to strategic management: Part I. Accounting Horizons, 15(1), 87–106. doi:10.2308/acch.2001.15.1.87
Kaplan, R. S., & Norton, D. P. (2001b). Transforming the balanced scorecard from performance measurement to strategic management: Part II. Accounting Horizons, 15(2), 147–162. doi:10.2308/acch.2001.15.2.147
Kaplan, R. S., & Norton, D. P. (2004a). How strategy maps frame an organization's objectives. Financial Executive, 20(2), 40–45.
Kaplan, R. S., & Norton, D. P. (2004a). Measuring the strategic readiness of intangible assets. Harvard Business Review, 82(2), 52–63.
Kaplan, R. S., & Norton, D. P. (2004b). Strategy maps: Converting intangible assets into outcomes. Boston, MA: Harvard Business School Press.
Kaplan, R. S., & Norton, D. P. (2004c). Strategy maps: Converting intangible assets into tangible outcomes. Boston: Harvard Business School Press.
Kaplan, R. S., & Norton, D. P. (2004c). The strategy map: Guide to aligning intangible assets. Strategy and Leadership, 32(5), 10–17. doi:10.1108/10878570410699825
Kaplan, R. S., & Norton, D. P. (2006). Alignment: Using the balanced scorecard to create corporate synergies. Boston, MA: Harvard Business School Press.
Kim, H.-S., & Kim, Y.-G. (2008). A CRM performance measurement framework: Its development process and application. Industrial Marketing Management.
Mather, D. (2003). CMMS: A timesaving implementation process. Boca Raton, FL: CRC Press.
Mokyr, J. (2005). The gifts of Athena: Historical origins of the knowledge economy. Princeton, NJ: Princeton University Press.
Niven, P. R. (2005). Balanced scorecard diagnostics: Maintaining maximum performance. Hoboken, NJ: John Wiley & Sons, Inc.
Nudurupati, S., Arshad, T., & Turner, T. (2007). Performance measurement in the construction industry: An action case investigating manufacturing methodologies. Computers in Industry, 58, 667–676. doi:10.1016/j.compind.2007.05.005
Puigjaner, L., & Guillen-Gosalbez, G. (2008). Towards an integrated framework for supply chain management in the batch chemical process industry. Computers & Chemical Engineering, 32, 650–670. doi:10.1016/j.compchemeng.2007.02.004
Rockart, J. F. (1986). A primer on critical success factors. In C. V. Bullen (Ed.), The rise of managerial computing: The best of the Center for Information Systems Research (pp. 383-423). Cambridge, MA: Sloan School of Management, MIT.
Schultink, G. (2000). Critical environmental indicators: Performance indices and assessment methods for sustainable rural development planning. Ecological Modelling, 130, 47–58. doi:10.1016/S0304-3800(00)00212-X
Stevenson, T. (2000). Will our futures look different, now? Futures, 32, 91–102. doi:10.1016/S0016-3287(99)00069-5
Tsoukas, H., & Shepherd, J. (2004). Coping with the future: developing organizational foresightfulness (Introduction). Futures, 36, 137–144. doi:10.1016/S0016-3287(03)00146-0
Ugwu, O. O., & Haupt, T. C. (2007). Key performance indicators and assessment methods for infrastructure sustainability - a South African construction industry perspective. Building and Environment, 42, 665–680. doi:10.1016/j.buildenv.2005.10.018
Van den Eynde, J., Veno, A., & Hart, A. (2003). They look good but don't work: A case study of global performance indicators in crime prevention. Evaluation and Program Planning, 26, 237–248. doi:10.1016/S0149-7189(03)00028-4
Warhurst, A. (2005). Future roles of business in society: The expanding boundaries of corporate responsibility and a compelling case for partnership. Futures, 37, 151–168. doi:10.1016/j.futures.2004.03.033
Wier, B., Hunton, J., & HassabElnaby, H. R. (2007). Enterprise resource planning systems and non-financial performance incentives: The joint impact on corporate performance. International Journal of Accounting Information Systems, 8, 165–190.
Woodhouse, J. (2000). Key performance indicators. Retrieved from http://www.TWPL.com
Woodhouse, J. (2004). Closing the loop: Sustainable implementations of improvements. In ERTC Reliability & Asset Management Conference; Oil, Gas, Petrochem & Power Industries.
This work was previously published in Enterprise Information Systems for Business Integration in SMEs: Technological, Organizational, and Social Dimensions, edited by Maria Manuela Cruz-Cunha, pp. 300-325, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 4.3
A Procedure Model for a SOA-Based Integration of Enterprise Systems

Anne Lämmer, sd&m AG, Germany
Sandy Eggert, University of Potsdam, Germany
Norbert Gronau, University of Potsdam, Germany
ABSTRACT

Enterprise systems are being transferred into a service-oriented architecture. In this article we present a procedure for the integration of enterprise systems. The procedure model starts with decomposition into Web services. This is followed by mapping redundant functions and assigning the original source code to the Web services, which are orchestrated in the final step. Finally, an example is given of how to integrate an Enterprise Resource Planning System and an Enterprise Content Management System using the proposed procedure model.

INTRODUCTION

Enterprise resource planning systems (ERP systems) are enterprise information systems designed to support business processes. They partially or completely include functions such as order processing, purchasing, production scheduling, dispatching, financial accounting and controlling (Stahlknecht & Hasenkamp, 2002). ERP systems are the backbone of information management in many industrial and commercial enterprises and focus on the management of master and transaction data (Kalakota & Robinson, 2001). Besides ERP systems, enterprise content management
systems (ECM systems) have also developed into company-wide application systems over the last few years. ECM solutions focus on indexing all information within an enterprise (Müller, 2003). They cover the processes of enterprise-wide content collection, creation, editing, managing, dispensing and use, in order to improve enterprise and cooperation processes (Koop, Jäckel, & van Offern, 2001; Kutsch, 2005). In order to manage information independently, ECM combines technologies such as document management, digital archiving, content management, workflow management and so forth. The use of ECM systems is constantly on the rise (Zöller, 2005). This leads to an increasing motivation for enterprises to integrate their ECM systems with the existing ERP systems, especially when considering growing international competition. The need for integration is also eminently based on economic aspects, such as the expense factor in system run time (Schönherr, 2005). For a cross-system improvement of business processes, enterprise systems have to be integrated.
RELATED WORK

Service-Oriented Architecture as an Integration Approach

A number of integration approaches and concepts already exist. They can be differentiated by integration level (for example data, function or process integration) and integration architecture (for example point-to-point, hub & spoke, SOA) (Schönherr, 2005). This article presents an approach to integrating enterprise systems by way of building up a service-oriented architecture. This integration approach is of special interest and will be described in more detail. The concept of service orientation is currently being intensively discussed. It can be differentiated from component orientation by its composition and index service (repository). Additionally,
SOA is suitable for a process-oriented, distributed integration (Schönherr, 2005). However, the goals addressed by component orientation and SOA are similar: different enterprise systems are connected through one interface, and cross-system data transfer and the reuse of objects or components are enabled. Thereby a service represents a well-defined function which is generated in reaction to an electronic request (Burbeck, 2000). The SOA approach offers a relatively easy way to connect, add and exchange single services, which highly simplifies the integration of similar systems (e.g., after an enterprise take-over). Moreover, SOA offers a high degree of interoperability and modularity (Behrmann & Benz, 2005), which increases the adaptability of enterprise systems (Gronau et al., 2006). The SOA approach is based on the concept of a service. The sender wants to use a service and in doing so wants to achieve a specific result. Thereby the sender is not interested in how the request is processed or which further requests are necessary. This is the idea of SOA, where services are defined in a specific language and referenced in a service index. Service requests and data exchange occur via predefined protocols (Dostal, Jeckle, Melzer, & Zengler, 2005; Küster, 2003). This service orientation can be used on different levels of architecture. The grid architecture is a common example on the infrastructure level (Bermann, Fox, & Hey, 2003; Bry, Nagel, & Schroeder, 2004). On the application level, an implementation usually takes place in terms of Web services. The use of Web services offers the possibility of reusing raw source code, which is merely transferred to another environment (Sneed, 2006). The benefit of this transfer is the reuse of perfected (old) algorithms. The main disadvantage is the necessity of revising the raw source code in order to find possible dependencies (Sneed, 2006). This is also true for enterprise systems. It is not efficient to reuse the entire old system, but
rather only significant parts of it. To accomplish this it is necessary to deconstruct the old enterprise system and to locate the source code parts which can effectively be reused. Our approach uses self-diagnosis for finding these source code locations. This analysis will be considered in the third integration step.
Self-Diagnosis

As just described, our approach uses self-diagnosis to locate useful source code. In the following, the method of self-diagnosis is presented and the differences to other approaches are shown. Some approaches for the transformation of legacy systems into a SOA already exist. However, these approaches see the whole system as one service: the system gets a service description for use as a single service in a SOA. Our approach differs in that it deconstructs the system to fit a tailored need. For this, the method of self-diagnosis is used. Self-diagnosis can be defined as a system's capacity to assign a specific diagnosis to a detected symptom. The detection of symptoms and the assignment are performed by the system itself without any outside influence (Latif-Shabgahi, Bass, & Bennett, 1999). The mechanism of self-diagnosis has been detected by surveying natural systems; it can partly be applied to artificial systems as well. The first step of self-diagnosis is the detection of symptoms. Usually the detection of one existing symptom is not sufficient to make an indisputable diagnosis. In this case, more information and data have to be gathered. This can be described as symptom collection. In a second step the symptoms are assigned to a specific diagnosis. Depending on the diagnosis, corresponding measures can be taken (Horling, Benyo, & Lesser, 2001). Symptoms are a very abstract part of self-diagnosis. These symptoms can be a high network load in distributed systems, missing signals, or buffer overload on the hardware layer. For enterprise systems the symptoms can be the frequency of usage of user interface elements by the user
or dependencies of code parts or components. Other types of symptoms are possible. In general, the answer to questions concerning which items are of interest to measure provides hints for possible symptoms. Self-diagnosis can be categorized by the symptom acquisition method: active and passive self-diagnosis must be distinguished. In this context, the program or source code is the crucial factor for the division between active and passive self-diagnosis. A fundamental basis for either alternative is an observer or monitor. Using passive self-diagnosis, the monitor detects and collects symptoms and information. It can either be activated automatically or manually (Gronau et al., 2006). If you know which items need to be observed and the point where this information can be gathered, you only have to monitor this point. This is what passive self-diagnosis does. For example: if you want to know how often a button is pressed, you have to find where the button-event is implemented in the code and observe this button-event. In active self-diagnosis, the program's functions or modules are the active elements. They send defined information to the monitor and act independently if necessary. The monitor is used as a receiver and interprets the gathered information and symptoms. The main advantage of active self-diagnosis is the possibility of detecting new symptoms, even if no clear diagnosis can be made, before the problems become acute and are forwarded to other systems. In contrast, using passive self-diagnosis, the monitor can only inquire about specific data. In this case, a response or further examination is only possible if the problem is already known. For example: if you do not know the location of all the buttons and/or the code component for the button-event, you will have to recognize all events with their initial point and filter them with the monitor. The monitor does not have to know how many buttons exist or where their code is located, but the buttons have
to "know" to register with the monitor. These are the requirements of active self-diagnosis. The assembly of diagnosis points depends on the application context and the software system. The required time and effort cannot be specified; it depends on the design and implementation of the software system. Self-diagnosis can also be employed for the examination of source-code usage and interdependencies. Depending on the desired information, different points of diagnosis have to be integrated into the source code. Different points of diagnosis have to be determined in order to allow for the allocation of code parts to various fields and functions. Therefore context, programming language, and software system architecture must be considered. Our approach uses this method to locate code parts that can be collected into components. As we will demonstrate later in this article, we need to locate the functions and business objects of the enterprise systems. This method can be used for the detection of code parts which are possible services. Diagnosis points must thereby be integrated into the system source code, and software dependencies analyzed. As we discussed earlier in this article, the main challenges in the integration of legacy enterprise systems like ERP and ECM are, first, the deconstruction and, second, the allocation of code. To address these challenges, we have developed a procedure model which will be described next.
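A minimal sketch of the active variant: each instrumented function (a "diagnosis point") reports its own invocation to the monitor. The monitor, decorator and function names are invented for illustration:

```python
from collections import Counter
from functools import wraps

class Monitor:
    """Receiver for diagnosis points: collects which code parts are called."""
    def __init__(self):
        self.calls = Counter()

    def report(self, symptom: str):
        self.calls[symptom] += 1

monitor = Monitor()

def diagnosis_point(func):
    """Active self-diagnosis: the function itself reports to the monitor."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        monitor.report(f"{func.__module__}.{func.__qualname__}")
        return func(*args, **kwargs)
    return wrapper

@diagnosis_point
def calculate_delivery_time(order_id: int) -> int:
    return 3  # placeholder business logic

calculate_delivery_time(42)
calculate_delivery_time(43)
print(monitor.calls)  # usage frequencies hint at cohesive service candidates
```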
PROCEDURE MODEL

In the following, a procedure model that integrates general application systems within a company is presented. The procedure model begins with the deconstruction of the systems into Web services. This is followed by a mapping of redundant functions and the assignment of the original source code to Web services, which are orchestrated in the last step. The process comprises taking the old ERP system, deconstructing it into different abstraction levels such as functional entities or business objects, searching for redundant entities, allocating the code fragments belonging to these functional entities, and encapsulating them. This results in many independent functional entities, each of which can be described as a service. They have different abstraction levels and have to be composed and orchestrated with, for example, WS-BPEL. This composition and orchestration is the way of integration.
services, which is orchestrated in the last step. The process includes taking the old ERP system; deconstructing it into different abstraction levels such as functional entities or business objects, searching for redundant entities, allocating the code fragments dependent on these functional entities and encapsulating them. This results in many independent functional entities, which can be described as a service. They have different abstraction levels and have to compose and orchestrate with, for example BPEL-WS. This composition and orchestration is the way of integration.
Deconstruction of systems First, the systems which are to be integrated are deconstructed into services. The challenge of this step depends on the number of particular services, which could span the range from one single service per system, up to a definition of every single method or function within a system as a service. In the case of a very broad definition, the advantages, such as easy maintenance and reuse and so forth, will be lost. In case of a very narrow definition, disadvantages concerning performance and orchestration develop; the configuration and interdependencies of the services become too complex. This article proposes a hierarchical approach which describes services of different granular qualities on three hierarchical levels. Areas of function of a system are described as the first of these levels (Figure 1, Part 1). For example, an area of functions could include purchase or sales in the case of an ERP system or, in the case of ECM systems, archiving or content management. An area of function can be determined on the abstract level by posing questions about the general “assigned task” of the system. The differences between the three hierarchical levels can be discovered by answering the following questions:
Figure 1. Procedure model for the integration of application systems.

Part 1 - Task-based decomposition of systems into three hierarchical levels:
Question 1: What are the tasks of the particular system? Result: the services on the first level, which constitute the basic tasks.
Question 2: Which functionality derives from every task? Result: the services on the second level, which are contributed by the different functions.
Question 3: Which business objects are utilised by both systems? Result: the business objects which will be used as basic objects in both systems, e.g. article data, customer data or index data.

Part 2 - Preparation of the integration and mapping of redundant functions:
Question 1: Which tasks, functions and basic functions appear more than once? Result: a list of possibly redundant functions.
Question 2: Are they redundant, i.e. superfluous, or do they provide different services?
Question 3: Can they be combined by appropriate programming?

Part 3 - Detection and assignment of services to code fragments:
Step 1: Definition of concepts for the points of diagnosis, depending on the systems and the information of interest about the source code.
Step 2: Programming and integrating the markers.
Step 3: Analysing the collected data.
Step 4: Reengineering of redundant services, depending on the answer to Question 3 of Part 2.

Part 4 - Orchestration of Web services:
Step 1: Selection of a description language for Web services (e.g. WS-BPEL).
Step 2: Wrapping of the original source code into Web services.
Step 3: Modelling of the business process which is important for the systems being integrated.
1. Question: What are the tasks of the particular system? The answers resulting from this step correspond to the services on the first level, which constitute the general tasks—for example sales, purchasing, inventory management or workflow management, archiving or content management. These tasks are abstract and describe main functionalities. They consist of many other functions, which are the objects of the next level.

2. Question: Which functionality derives from every single task? The answers to this question correspond to the services on the second level, which are contributed by the various functions. These functions are more detailed than the general tasks. They describe what the tasks consist of and what they do—for example, calculate the delivery time, identify a major customer, or constitute check-in and e-mail functionalities. For these functions the application needs data, which can be found in the third level.

3. Question: Which business objects are utilized by both systems? The answer consists of the business objects that will be used as basic objects in both systems, for example article data, customer data or index data.
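The answers to the three questions can be captured as a simple three-level hierarchy. A sketch with invented ERP/ECM entries (level 1: tasks; level 2: functions per task; level 3: shared business objects):

```python
erp_services = {
    "sales":      ["calculate delivery time", "identify major customer"],
    "purchasing": ["create purchase order", "approve supplier invoice"],
}
ecm_services = {
    "archiving":          ["check in document", "check out document"],
    "content management": ["index content", "search content"],
}
# Business objects utilized by both systems (level 3).
shared_business_objects = ["article data", "customer data", "index data"]
```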
In this procedure model, all possible levels of service deconstruction are addressed; yet the realization on all hierarchical levels constitutes an individual task. The result of this step is a 3-stage model displaying the services of an application. The data level, that is, the integration of databases, is not further examined at this point since it is not an integral part of our model, the aim of which is to wrap functions as Web services without altering them or the original source code. The data level is not touched by this process.
Preparation and Mapping

The main advantage of a Web service architecture is the high degree of possible reuse. By division into three hierarchical levels, a detection of similar functions is made possible, especially on the level of functionality and business objects. In some cases an adjustment of the functions is necessary in order to serve different contexts of use. Therefore, the next step consists of integration on different levels and the mapping of identical functions (Figure 1, Part 2). This step poses the following questions:
1. Question: Which tasks, functions and business objects appear more than once? For example, most applications contain search functions. Some applications have functions for check-in and check-out. ERP systems calculate the time for many things with the same algorithm under different names.

2. Question: Are these multiple functions and objects redundant, that is, superfluous, or do they provide different services? Some functions may have the same name but perform different tasks.

3. Question: Can these multiple functions and objects be combined by way of appropriate programming? For the functions ascertained in Question 2 to be similar functions with different names, the possibility of integrating them into one has to be analyzed.
The advantage of this mapping is the detection of identical functions which may only be named differently while completing the same task; a sketch of such a mapping follows below. In doing so, the benefit of reuse can be exploited to a high degree. Additionally, this part of the survey allows for a minimization of programming, due to the encapsulation of multiple functions. Only those functions which share a high number of similarities but nevertheless complete different tasks have to be reprogrammed; by such reprogramming they can be merged.
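A sketch of the mapping step: grouping the level-2 functions of both systems under a normalized name makes candidates for merging visible. The normalization rule is a crude placeholder; real duplicate detection would also compare signatures and behaviour:

```python
from collections import defaultdict

# Level-2 functions of both systems (invented entries).
erp_services = {
    "sales": ["Search", "calculate delivery time"],
    "inventory management": ["Check-In", "Check-Out"],
}
ecm_services = {
    "archiving": ["check in", "check out"],
    "content management": ["search"],
}

def normalized(name: str) -> str:
    """Crude normalization so 'Check-In' and 'check in' map to the same key."""
    return name.lower().replace("-", " ").strip()

candidates = defaultdict(list)
for system, services in (("ERP", erp_services), ("ECM", ecm_services)):
    for task, functions in services.items():
        for fn in functions:
            candidates[normalized(fn)].append((system, task, fn))

# Functions appearing more than once are candidates for Questions 2 and 3.
duplicates = {name: hits for name, hits in candidates.items() if len(hits) > 1}
for name, hits in duplicates.items():
    print(name, "->", hits)
```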
It is important to note that in this part the deconstruction takes place on an abstract level, in the functional view. In the following step, this will change from a functional view to a code view.
Detection and Assignment of Services to Code Fragments
The next step brings the biggest challenge, namely the transformation of existing applications into a service-oriented architecture. Until now, services have been identified by their tasks, but the correlation to existing source code still needs to be established. This is accomplished in the next step (Figure 1, Part 3). Self-diagnosis is used at this point to integrate the earlier defined points of diagnosis into the source code. These points of diagnosis actively collect usage data and facilitate conclusions concerning the fields and functions via their structure. The structure of the points of diagnosis depends on the context of their application and on the software system. The complexity of the process cannot be specified in general, as it also depends on the structure and programming of the software systems. As we discussed earlier in Section 2.2, the points of diagnosis depend on what needs to be observed. Here we want to know which code fragments belong together and execute the functions identified in the functional view. From this follows the necessity of a monitor. For example, the points can be every method call in the source code of an ERP system. If the user calls a function, the points of diagnosis have to inform the monitor that they were called. The monitor has to recognize and analyze which method calls belong together. Now the code fragments are analyzed and assigned to the functions identified in Part 1, and the wrapping of code fragments into Web services can be started. This step necessitates the usage of the existing source code and the description of the relevant parts with a Web service description language, making possible the reuse of source code in a service-oriented architecture. If redundant services that need to be reengineered have been detected in Part 2, the reengineering happens now.
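To make the "points of diagnosis" idea concrete, here is a minimal sketch using AspectJ-style aspect-oriented programming, which the chapter's own example (Part 3 below) also relies on. The package name erp and the CallMonitor class are hypothetical; the real structure of the points of diagnosis depends, as stated above, on the software system being observed.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class DiagnosisAspect {
    // point of diagnosis: every method execution in the (hypothetical) erp package
    @Before("execution(* erp..*(..))")
    public void report(JoinPoint jp) {
        // inform the monitor which method was called, without touching the original code
        CallMonitor.record(jp.getSignature().toLongString(), System.nanoTime());
    }
}

class CallMonitor {
    // the monitor collects usage data; clustering calls that occur together, which
    // assigns code fragments to functional services, is omitted in this sketch
    static void record(String signature, long timestamp) {
        System.out.println(timestamp + " " + signature);
    }
}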
Orchestration of Web Services
The results of stage 3 are the described Web services. These have to be connected with each other depending on the business process. This orchestration takes place in several steps (Figure 1, Part 4). First, the context must be defined; second, the service description language has to be selected; and third, the Web services need to be combined. A four-stage procedure model for a service-oriented integration of application systems has now been described. This process holds the advantages of a step-by-step transformation. The amount of time needed for this realization is considerably higher than in a "big bang" transformation; however, a "big bang" transformation holds a higher risk and therefore requires high-quality preparation measures. For this reason, a "big bang" transformation is dismissed in favor of a step-by-step transformation. There is yet another important advantage in the integration or deconstruction of application systems into services when carried out in several steps. First, a basic structure is built (construction of a repository, etc.). Next, a granular decomposition into Web services occurs on the first level, thereby realizing a basic transformation towards a service-oriented concept. Following this, Web services of the second and third hierarchical levels can be integrated step by step. This reduction into services provides high-quality integration. The procedure model we just presented is very abstract. Therefore, a practical example for two enterprise systems, ERP and ECM, will be given in Part 4.
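The chapter recommends WS-BPEL for the orchestration itself (see Part 4 of the example). Purely as an illustration of the composition logic, the plain Java sketch below mimics what a BPEL sequence does: the described Web services, reachable via their URIs, are invoked in the order the business process requires. All step names are hypothetical.

import java.util.List;
import java.util.function.Function;

public class InvoiceProcess {
    public static void main(String[] args) {
        // each step stands in for one wrapped Web service call
        List<Function<String, String>> steps = List.of(
            doc -> "scanned:" + doc,          // ECM: document management service
            doc -> "indexed:" + doc,          // ECM: create index terms service
            doc -> "booked:" + doc);          // ERP: invoice posting service

        String state = "supplier-invoice";
        for (Function<String, String> step : steps)
            state = step.apply(state);        // sequential composition, as in a BPEL sequence
        System.out.println(state);
    }
}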
EXAMPLE OF APPLYING THE PROCEDURE MODEL
It is necessary to develop a general usage approach and to test it on ERP and ECM systems, since no concrete practical scenario for these technologies exists as of yet (Issing, 2003). The aim of this example of use is to describe the integration of both company-wide systems, ERP and ECM, using our presented approach. In what follows, we present a case study of the integration of two systems: an ERP and an ECM system. A German manufacturer of engines and devices administers a complex IT landscape. This IT landscape includes, among others, two big enterprise systems. One of them is the ERP system "Microsoft Dynamics NAV" and the other is the ECM system "OS.5|ECM" of Optimal Systems. The ERP system includes modules such as purchasing, sales and inventory management. The ECM system consists of modules such as document management, archiving and workflow management. In the current situation a bidirectional interface between both systems exists. One example of a business process in which both systems are used is the processing of
incoming mail and documents. In order to scan and save the incoming invoices of suppliers, the "document management" module of the ECM system is used. Access to the invoice is made possible through the ERP system. In the future, a SOA-based integration of both enterprise systems can reasonably be expected under the aspect of business process improvement. Referring to the example mentioned above, the "portal management" component could be used to access, search, and check in all incoming documents. What follows is a description, in four parts, of the integration based on the procedure model we presented in Part 3.
Part 1: Segmentation of ERP and ECM Systems into Services
According to the procedure model (Figure 1), the individual systems will be separated into independent software objects, which in each case complete specified functions or constitute business objects. The segmentation is structured in three bottom-up steps (Figure 2). Identification is based on the answers to the questions concerning the main tasks of the specific systems.
Figure 2. Segmentation of services. Basic functions ERP (selection): purchase, sales, master data management, article management, inventory management. Basic functions ECM (selection): content management, archiving, workflow management, repository management, document management, collaboration, portal management. Areas of functions (selection): check in, check out, identify delivery time, e-mail connection, create index terms. Business objects (selection): customer, articles, index data, prices, master data, business partner.
The basic functions of an ERP system are purchasing, sales, master data management, inventory management and repository management. Document management, content management, records management, workflow management and portal management are basic functions of ECM systems. Subsequently, the areas of functions are disaggregated into separate tasks. Business objects are classified, such as the business object "article" or "customer". Thus, a segmentation into areas of functions, tasks of functions and business objects is achieved and a basis for the reuse of services is created.
Part 2: Preparation of Integration/Mapping
The result of the first segmentation step is a separation of services of differentiated granularity per system. According to the procedure model, the mapping on the different areas is arranged in the second step. For that purpose, the potential services described are examined for similarities. On every level of the hierarchy, the functional descriptions (answers to the questions in Part 1) of the services are checked and compared with each other. If functions or tasks are similar, they have to be checked for the possibility of combination and be slotted for later reprogramming. One example of such similarity between functions is "create index terms". Most enterprise systems include the function "create index terms" for documents
such as invoices or new articles. The estimation of the analogy of different functions, particularly in enterprise systems where the implementation differs, lies in the expertise of the developer. Another example is the service "check in/check out". This service is a basic function of both ERP and ECM systems and is now to be examined for possible redundancy. After determining that the services "check in" or "check out" are equal, the service is registered as a basic function only once. Services which are not equal but related are checked in another step and either unified with suitable programming or, if possible, split into different services. The results of this step are the classification of services from the ERP and ECM systems into similar areas and the separation of redundant services. The following table shows examples of separated services. By this separation of both enterprise systems, a higher degree of reuse and improved complexity handling of these systems is achieved. For the application of services, a service-oriented architecture (SOA), which defines the different roles of the participants, is now required (Burbeck, 2000).
Part 3: Detection and Assignment of Services to Code Fragments
As already described in the general introduction, the identification of the functions to be segmented in the source code constitutes one of the biggest challenges in a transfer to a service-oriented architecture.
Table 1. Examples of separate services of ERP and ECM systems

                        ERP                         ECM
Basic Functions         Purchase                    Content management
                        Sales                       Archiving
                        Article management          Document management
                        Repository management       Workflow management
Areas of Functions      Check in                    E-mail connection
                        Identify delivery time      Save document
                        Check out                   Create index terms
As part of this approach, the method of self-diagnosis is suggested. Appropriate points of diagnosis are linked to the source code in order to draw conclusions from the used functions to the associated class, method or function in the original source code. Through the use of aspect-oriented programming, aspects can be programmed and linked to the classes and methods of the application system. Necessary data, such as the name of the accessed method, can be collected by accessing the respective classes and methods (Vanderperren, Suvée, Verheecke, Cibrán, & Jonckers, 2005). Based on a defined service, "order transaction", all the names of the methods which are necessary for the execution of "order transaction" must be identified. To wrap the service "order transaction", for example to combine it with a Web service description language, the original methods need to be searched for and encapsulated. Additionally, the reprogramming of redundant functions is part of this phase of identification and isolation of services. This, as well, is only possible if the original methods are identified.
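As an illustration of the wrapping just described, the following sketch uses the standard JAX-WS annotations to encapsulate the identified legacy methods behind an annotated endpoint, from which the Web service description (WSDL) can be generated. The chapter does not prescribe a specific toolkit, and the OrderModule class and its method are hypothetical stand-ins for the untouched original ERP code.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class OrderTransactionService {
    private final OrderModule legacy = new OrderModule(); // original code, unaltered

    @WebMethod
    public String orderTransaction(String articleId, int quantity) {
        return legacy.processOrder(articleId, quantity);  // delegate to the legacy methods
    }

    public static void main(String[] args) {
        // publishing the endpoint makes the generated WSDL available under ?wsdl
        Endpoint.publish("http://localhost:8080/orderTransaction",
                         new OrderTransactionService());
    }
}

class OrderModule {
    String processOrder(String articleId, int quantity) {
        return "order placed: " + quantity + " x " + articleId;
    }
}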
Part 4: Orchestration of Web Services
The last integration phase is used to compose the Web services. The previous steps of the procedure model had to be completed in preparation. The Web services are now completely described and have a URI through which they can be accessed. Now, only the composition and the chronology of the requests to the specific Web services are missing. For the orchestration, the Web services business process execution language (WS-BPEL) is recommended. WS-BPEL was developed by the OASIS group and is currently in the process of standardization (Cover, 2005). If the Web services represent functions within a business process, WS-BPEL is particularly suitable for the orchestration of Web services (Lübke, Lüecke, Schneider, & Gómez, 2006). Essentially, BPEL is a language to compose new Web services from existing Web services with the help of workflow technologies (Leymann & Roller, 2000; Leymann, 2003). In BPEL, a process is defined which is started by a workflow system in order to start a business process. Web services are addressed via a graphical representation with the modelling imagery of WS-BPEL. The business process is modelled independently from the original enterprise systems. Since in the first integration step the systems were separated by their tasks and functions, all of the functions are now available for the business process as well.
CONCLUSION
The procedure model for the integration of application systems as presented in this paper is an approach that has been successfully deployed in one case. Currently the assignment ability and the universality are being tested. The self-diagnosis, that is, the assignment of source code to services via aspect-oriented programming, constitutes a bigger challenge. A verification of costs and benefits cannot yet be given sufficiently; however, several examples show convincing results and suggest a general transferability. The complexity of such a realization cannot be specified in general. Particularly for bigger and more complex systems, the cost-to-benefit ratio has to be verified. Despite this, it must be recognized that the assignment of code fragments to functions is not an easy task. If one observes every method call, a high number of calls must be analyzed. Visualization can be helpful for the analysis, since method calls belonging together will build a cluster in the emerging network. The observation of method calls is possibly not the optimal way for very complex systems. If the functional view of services in Part 1 covers not the business object layer but only the general task layer, one can reduce the number of diagnosis points. The possibilities depend on the programming language and its constructs.
Finally, the approach presented above describes a procedure model for the service-oriented integration of different application systems. The integration proceeds using Web services, which thereby improve integration ability, interoperability, flexibility and sustainability. The reusable Web services facilitate the extraction of several functions and their combination into a new service. This allows for the reuse of several software components. Altogether, Web services improve the adaptability of software systems to the business processes and increase efficiency (Hofmann, 2003). As an example of the realization of the procedure model, an integration of an ERP and an ECM system was chosen. The reasons for this choice are the targeted improvement of business aspects and the increasing complexity of both application systems. Dealing with this complexity makes integration necessary. Through mapping, redundant functions can be detected and, as a consequence, a reduction of the complexity is made possible. Regarding the adaptability and flexibility of the affected application systems, Web services are a suitable approach for integration. In particular, it is the reuse of services and an adaptable infrastructure which facilitate the integration. In addition, we expect to discover further advantages concerning the maintenance and administration of the affected application systems.
REFERENCES
Bry, F., Nagel, W., & Schroeder, M. (2004). Grid computing. Informatik Spektrum, 27(6), 542-545.
Burbeck, S. (2000). The Tao of e-business services. IBM Corporation. Retrieved January 12, 2008, from http://www.ibm.com/software/developer/library/ws-tao/index.html
Cover, R. (2004). Web standards for business process modelling, collaboration, and choreography. Retrieved January 12, 2008, from http://xml.coverpages.org/bpm.html
Dostal, W., Jeckle, M., Melzer, I., & Zengler, B. (2005). Service-orientierte architekturen mit Web services [Service oriented architectures with web services]. Spektrum Akademischer Verlag.
Gronau, N., Lämmer, A., & Andresen, K. (2006). Entwicklung wandlungsfähiger auftragsabwicklungssysteme [Development of adaptable enterprise systems]. In N. Gronau & A. Lämmer (Eds.), Wandlungsfähige ERP-Systeme (pp. 37-56). Gito Verlag.
Gronau, N., Lämmer, A., & Müller, C. (2006). Selbstorganisierte dienstleistungsnetzwerke im maschinen- und anlagenbau [Self organized service networks in engineering]. Industrie Management, 2, 9-12.
Hofmann, O. (2003). Web-services in serviceorientierten IT-Architekturkonzepten [Web services in service oriented concepts of IT architecture]. In H.-P. Fröschle (Ed.), Web-services (pp. 27-33). Praxis der Wirtschaftinformatik, HMD 234. dpunkt Verlag.
Behrmann, T., & Benz, T. (2005). Service-oriented-architecture-ERP. In T. Benz (Ed.), Forschungsbericht 2005. Interdisziplinäres Institut für intelligente Geschäftsprozesse.
Berman, F., Fox, G., & Hey, T. (2003). Grid computing: Making the global infrastructure a reality. Wiley.
Horling, B., Benyo, B., & Lesser, V. (2001). Using self-diagnosis to adapt organizational structures. Computer Science Technical Report TR-99-64, University of Massachusetts.
Issing, F. (2003). Die softwarestrategie für webservices und unternehmensinfrastrukturen der firma Sun Microsystems [Software strategy for web services and enterprise infrastructures
of Sun Microsystems]. In H.-P. Fröschle (Ed.), Web-Services (pp. 17-26). Praxis der Wirtschaftinformatik, HMD 234. dpunkt Verlag.
Kalakota, R., & Robinson, M. (2002). Praxishandbuch des e-business [Practice e-business]. Financial Times Prentice Hall, 317ff.
Koop, H. J., Jäckel, K. K., & van Offern, A. L. (2001). Erfolgsfaktor content management—Vom Web-content bis zum knowledge management [Success factor enterprise content management—from web content to knowledge management]. Vieweg Verlag.
Küster, M. W. (2003). Web-services—Versprechen und realität [Web services—promises and reality]. In H.-P. Fröschle (Ed.), Web-services (pp. 5-15). Praxis der Wirtschaftinformatik, HMD 234. dpunkt Verlag.
Kutsch, O. (2005). Enterprise-content-management bei finanzdienstleistern—Integration in strategien, prozesse und systeme [Enterprise content management at financial service providers—integration into strategies, processes and systems]. Deutscher Universitäts-Verlag.
Kuropka, D., Bog, A., & Weske, M. (2006). Semantic enterprise services platform: Motivation, potential, functionality and application scenarios. In Proceedings of the 10th IEEE International EDOC Enterprise Computing Conference (pp. 253-261). Hong Kong.
Leymann, F., & Roller, D. (2000). Production workflow—Concepts and techniques. Prentice Hall International.
Latif-Shabgahi, G., Bass, J. M., & Bennett, S. (1999). Integrating selected fault masking and self-diagnosis mechanisms. In Proceedings of the 7th Euromicro Workshop on Parallel and Distributed Processing, PDP'99 (pp. 97-104). IEEE Computer Society.
Leymann, F. (2003). Choreography: Geschäftsprozesse mit web services [Choreography: Business processes with web services]. OBJECTspektrum, 6, 52-59.
Lübke, D., Lüecke, T., Schneider, K., & Gómez, J. M. (2006). Using event-driven process chains for model-driven development of business applications. In F. Lehner, H. Nösekabel, & P. Kleinschmidt (Eds.), Multikonferenz wirtschaftsinformatik 2006 (pp. 265-279). GITO-Verlag.
Müller, D. (2003). Was ist enterprise-content-management? [What is enterprise content management?] Retrieved January 14, 2008, from http://www.zdnet.de/itmanager/strategie/0,39023331,2138476,00.htm
Scheckenbach, R. (1997). Semantische geschäftsprozessintegration [Semantic integration of business processes]. Deutscher Universitäts-Verlag.
Schönherr, M. (2005). Enterprise applikation integration (EAI) und middleware, grundlagen, architekturen und auswahlkriterien [Enterprise application integration (EAI) and middleware, fundamentals, architectures and criteria of choice]. ERP Management, 1, 25-29.
Sneed, H. M. (2006). Reengineering von legacy programmen für die wiederverwendung als web services [Reengineering of legacy software for reuse as web services]. In Proceedings zum Workshop Software-Reengineering und Services der Multikonferenz Wirtschaftsinformatik.
Stahlknecht, P., & Hasenkamp, U. (2002). Einführung in die wirtschaftsinformatik [Introduction to business computing] (10th ed.). Springer Verlag.
Vanderperren, W., Suvée, D., Verheecke, B., Cibrán, M. A., & Jonckers, V. (2005). Adaptive programming in JAsCo. In Proceedings of the 4th International Conference on Aspect-Oriented Software Development. ACM Press.
Zöller, B. (2005). Vom archiv zum enterprise content management [From archive to enterprise content management]. ERP Management, 4, 38-40.
This work was previously published in Always-On Enterprise Information Systems for Business Continuance: Technologies for Reliable and Scalable Operations, edited by Nijaz Bajgoric, pp. 265-276, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 4.4
Size Matters!
Enterprise System Success in Medium and Large Organizations
Darshana Sedera, Queensland University of Technology, Australia
ABSTRACT
Organizations invest substantial resources in acquiring Enterprise Systems, presumably expecting positive impacts on the organization and its functions. Despite the optimistic motives, some Enterprise System projects have reported nil or detrimental impacts. This chapter explores the proposition that the size of the organization (e.g. medium, large) may contribute to the differences in benefits received. The alleged differences in organizational performance are empirically measured along four dimensions, using a prior validated model and data gathered from 310 respondents representing 27 organizations.
INTRODUCTION
Enterprise System (ES) is an ideology of planning and managing the resources of an entire organization in an efficient, productive, and profitable manner, manifested in the form of configurable information system packages (Laukkanen, Sarpola et al. 2007). Enterprise System vendors promote fully integrated core business processes throughout the organization, with seamless integration of the information flowing from one functional area to another. Amongst a myriad of benefits, Enterprise Systems are said to deliver key benefits such as cost reduction, productivity improvement, quality improvement, customer service improvement, better resource management, improved decision-making and planning, and organizational empowerment (Shang and Seddon 2002).
DOI: 10.4018/978-1-59904-859-8.ch016
Organizations devote substantial resources and time to acquiring an Enterprise System, presumably expecting positive impacts on the organization and its functions. These extensive ES implementations are typically measured in millions of dollars (Pan et al., 2001), and for many organizations they represent the largest single IT investment. The substantial resource requirements for Enterprise Systems have restricted the Enterprise System market to medium and large organizations, with many suggesting that ES are best suited for large corporations (Hillegersberg and Kumar 2000). With recent changes in the marketplace, wherein the demand for Enterprise Systems from large organizations has plateaued, vendors are attempting to shift their emphasis to Small and Medium Enterprises (SMEs) with scaled-down ES products (Piturro 1999; Everdingen, Hillegersberg et al. 2000). Measuring the impacts of Enterprise Systems takes on special importance, since the costs and risks of these large technology investments rival their potential payoffs. Often carefully rationalized in advance, ES investments are too seldom systematically evaluated post-implementation (Thatcher and Oliver 2001). Welsh and White (1981) differentiated small and large organizations using aspects such as time, skills, and resources – with the smaller organizations lacking all three compared to their counterparts. D'Amboise and Muldowney (1988) argue that the lack of resources has made smaller organizations more vulnerable to environmental effects and misjudgments, forcing them to allocate more time to adjusting rather than to predicting and controlling. Resource constraints have been found to hinder IT adoption (Baker 1987; Cragg and Zinatelli 1995; Iacovou, Benbasat et al. 1995; Proudlock, Phelps et al. 1999), and to negatively affect IS implementation success (Thong 2001) and IT growth (Cragg and King 1993) in SMEs. With the aforementioned background – where organizations devote huge resources to acquiring ES and many do not receive the anticipated benefits, and where the traditional market is leveling, with ES vendors moving into the SME market segment – this chapter discusses whether organization size has an influence over the benefits brought to bear by the Enterprise System. This study aims to contribute to the literature by investigating the relationship of organizational size with the performance of the system (commonly referred to as System Success). Although prior research (Raymond 1985; DeLone 1988; Raymond 1992; Lai 1994) has contributed to our understanding of IS and organization size, few studies have empirically assessed the influence of organizational size on contemporary IS success. More importantly, instead of resorting to the customary approach of considering large and medium-sized organizations as one homogeneous group receiving equal benefits, this study aims to bring forth the differences between these two groups using four system-related dimensions. The study presented herein investigates the influence of organization size on ES performance. ES impacts are empirically measured using information received from 310 responses representing 27 organizations that had implemented a market-leading Enterprise System solution in the second half of the 1990s. The chapter begins with a historical overview of the literature on size as an important determinant. The broad contextual overview begins by differentiating characteristics of medium vs. large organizations and demonstrating the impact of such contextual factors on system success. The research context is introduced next, followed by discussions of the research methodology and the data collection instrument. The final section demonstrates the observed differences between the two organizational sizes and the research implications.
BACKGROUND
Prior research suggests that organizational context is a determinant of Information System (IS) success. Researchers have concluded that medium
organizations have distinctive and unique needs compared to large organizations (Raymond 1985; DeLone 1988; Lai 1994) and that, therefore, the research findings from large organizations cannot be generalized to small and mid-sized firms. Schultz and Slevin (1975) and Ein-Dor and Segev (1978) were among the very first to point out the importance of organizational factors in managing Information Systems. In their early work on Management Information Systems (MIS), Ein-Dor and Segev (1978) proposed a framework in which they identified organization size as a critical variable. Ein-Dor and Segev (1978) identified ten (10) organizational variables with direct or indirect influence on the impact of an IS. The identified variables are: (1) organization size, (2) maturity, (3) structure, (4) time frame, (5) psychological climate towards [CB] IS, (6) organizational situation, (7) rank of responsible executives, (8) location of responsible executives, (9) steering committee location and rank and (10) resources. They found that organization size had special importance because of its influence on resource availability, the requirements necessary for integration of professional units within an organization, the degree of formalization of organizational systems, and the lead time for planning and implementation. Furthermore, Ein-Dor and Segev (1978) recognized organization size as an uncontrollable variable and stated that [CB] IS projects are less likely to succeed in smaller organizations compared to larger organizations. Bilili and Raymond (1993) described the SME decision-making process as reactive, informal, and intuitive. Doukidis et al. (1996) and Proudlock et al. (1999) asserted the opportunistic, day-to-day focus of small to mid-sized organizations in relation to Information Systems. Whisler (1970) studied nineteen insurance companies and reported that firm size was directly related to the performance of IS. Cheney (1983) identified various factors that would affect a small business firm's success or failure in using information systems and found three areas of difficulty
associated with small business information systems: (1) software problems, (2) hardware problems and (3) implementation problems. DeLone (1981) studied the relationship between the size of a manufacturing firm and IS usage and concluded that firm size is: (1) directly related to the age of the firm's computer operations, (2) inversely related to the amount of external programming that is used, (3) directly related to the portion of revenues allocated to Electronic Data Processing (EDP), and (4) inversely related to the percentage of EDP costs that are used for computer equipment. He also explained that smaller firms experience more computer-related problems than their larger counterparts. Melone (1985) found that managers in small to mid-sized organizations rate accounting and inventory control as the most frequently used and important applications, and reported that inventory control was the most problematic aspect of computer usage in such organizations. Nickell and Seado (1986) reported similar findings using 121 small businesses, stating that budgeting and inventory control were the primary uses of IS in small organizations. Farhoomand and Hrycyk (1985) reported a lack of technical staff as a substantial issue for small to mid-sized companies. A study by Cooley et al. (1987) identified the importance of user-friendly interfaces and lower implementation costs as key factors affecting end users in small to mid-sized organizations. Montazemi (1988), investigating the aforementioned proposition, confirmed the impact of organization size on end user satisfaction. An organization has two basic options when it decides to implement a computerized application: (1) to have its own staff develop the software, or (2) to acquire packaged software from a vendor (Raymond, 1985). Turner (1992) stated that as a firm increases in size, it will demand more sophisticated software. Even though this argument is intuitive, it suggests a correlation between organization size and the adoption of packaged software. Turner (1992) specifically emphasized
the importance of small to mid-tier organizations obtaining assistance from external sources rather than developing applications in house. To the contrary, Raymond (1985) found that small firms are capable of developing, implementing and administering their own applications in-house, compared to their larger counterparts, specifying that small organizations could maintain an IS with minimal financial, technical and personnel requirements. The resource constraints have led SMEs to follow an incremental approach to IT investments, which, in turn, may result in isolated and incompatible systems, as well as decreased flexibility (Levy and Powell 1998). Raymond (1992) emphasized the advantages for small to mid-sized firms of developing in-house applications rather than adopting packaged software from commercial vendors, further adding that end user computing (where users have direct control over their computing needs) is more appropriate for such organizations than adopting a packaged software application. Many researchers have alluded to the skill scarcity for information systems in small to mid-sized organizations (Bilili and Raymond 1993; Levy and Powell 1998; Mitev and Marsh 1998). Laukkanen et al. (2007) suggest that the resource constraints faced by SMEs may hinder their ability to keep technology up to date, while at the same time forcing them to consider their investments in IT as something that should last for a long time (Levy and Powell 1998). Soh et al. (1992) investigated the importance of external consultants to computerization success in small businesses, concluding that the level of computer system usage in small businesses with consultants is higher than in small businesses without consultants. Further, they added that small businesses that engage consultants are less likely to complete their IS projects on time and within budget. Harrison et al. (1997) used the Theory of Planned Behaviour (TPB) to explain
technology adoption. They found that as business size increased, the importance of expectations from the [social] environment increased. However, they observed a negative correlation with the importance of intra-firm consequences and control over the potential barriers to IS adoption. Hong and Kim (2001) explored the 'fit perspective' in 34 Enterprise System installations where organizational size was implicitly considered a critical contingency variable. In classifying organizations on the small → medium → large spectrum, many authors use the number of employees as the sole classifier. For example, in a recent study (Laukkanen, Sarpola et al. 2007), SMEs are defined as enterprises with fewer than 250 employees, wherein small organizations are defined as companies with fewer than 50 employees, and large organizations are simply classified as those companies that do not meet the definition of SMEs and have more than 250 employees (Chau 1994; Chau 1995). However, it is unrealistic to associate an organization of 50 staff (or fewer) with a large-scale traditional Enterprise System implementation, as such systems are targeted at larger organizations. Though the number of employees in a company provides some indication of the size of the Enterprise System, at times it can be quite misleading because not all employees have access to the Enterprise System. For example, in a health and pharmaceutical organization – where the majority of the staff are on medical duties (e.g. doctors and nurses) – the actual Enterprise System users will be a small proportion of the total number of employees. In recent years, Sedera et al. (2003) suggested the use of the number of user licenses to determine the size of the organization for Enterprise System discussions. They suggest keeping 1000 (or more) concurrent user licenses as a benchmark for large organizations and classifying anything below as a medium enterprise.
THE STUDY CONTEXT
The empirical data collection was conducted across 27 public sector agencies running the market-leading Enterprise System live. These 27 agencies were the first Australian state agencies to have implemented common financial management software state-wide. In 1995 the state Government commenced implementation of the Financials module across all state Government agencies (later followed by Controlling, Materials Management and, in some agencies, Human Resources); the installation soon became one of the largest Enterprise System installations in Australia. The state Government approach was very much focused on using the Enterprise System as a common reporting and financial management tool. The objectives of the new financial system were to provide a financial management system to state Government agencies that would: (1) support the 'Managing for Outcomes' (MFO) framework and financial management improvement activities, (2) encourage best practice resource management across state Government, (3) facilitate the consolidation of state Government financial information, (4) meet the business needs of agencies and (5) achieve economies of scale in main operations. Despite the benefits claimed by most of the agencies, a relatively smaller agency that provides corporate services to a group of other agencies demonstrated its dissatisfaction with its Enterprise System. Even though the Enterprise System provided rich functionality to this organization, the senior management believed that the system in place was too complex and too expensive to operate in a smaller organization. After three years of using the implemented Enterprise System, the agency decided to replace it with a local, smaller-scale Enterprise System. This contextual background further motivates the proposition under discussion – whether organizational size influences the benefits received.
THE SURVEY
A survey instrument was designed to operationalize the 27 measures of ES-success depicted in Figure 1 (see details in Gable, Sedera et al., 2003; Sedera and Gable, 2004). All items were scored on a seven-point Likert scale with the end values (1) 'Strongly disagree' and (7) 'Strongly agree', and the middle value (4) 'Neutral'. The draft survey instrument was pilot tested with a selected sample of staff of the state Government Treasury Department. Feedback from the pilot round respondents resulted in minor modifications to survey items. The survey gathered additional demographic details on respondents' employment title (e.g. Director, Business Analyst, Application Programmer). Furthermore, the respondents were asked to provide a brief description of their involvement with the Enterprise System. Supplementary information on the organizational structure, the characteristics of the Enterprise System (i.e. modules in use and hardware in place) and the number of users in each agency was gathered from objective sources. In addition to the 27 items of Figure 1, the questionnaire included two criterion items aimed at gauging the respondent's perception of overall ES-success: (1) 'overall… the impact of [the name of the Enterprise System] on the agency has been positive' and (2) 'overall… the impact of [the name of the Enterprise System] on me has been positive'.
Figure 1. The measures of the ES-success measurement model
Figure 3. Results of t-test for the criterion measure (alpha = 0.05)
RESULTS AND ANALYSIS
A total of three hundred and nineteen (319) responses from twenty-seven (27) public sector agencies were received. Nine responses were removed from the analysis due to missing values and perceived frivolity. Using the number of SAP user licenses, the sample was divided into two mutually exclusive groups representing the respondents from medium organizations and large organizations. Organizations with more than 1000 SAP user licenses were considered large agencies and the rest medium agencies. Additional criteria (i.e. number of employees, dispersion of the organization) were established to supplement the principal criterion in the grouping exercise where the initial classification was unclear. Figure 2(a) shows the breakdown of organizations classified into medium and large organizations, and Figure 2(b) shows the classification of respondents segregated into the two agency cohorts. All indications suggest that this distribution is representative of the users of the Enterprise System in the state
Government. All participating agencies had (1) the same Enterprise System software application, (2) similar versions of the Enterprise System, (3) the same phase of the ES life cycle, and (4) the Financial Accounting and Controlling and Materials Management modules installed, which created a unique homogeneous environment, increasing the comparability of the data.
Figure 2. Respondent classification
The one-way analysis of variance (ANOVA) F-test was chosen as the method for conducting the analysis of the Likert scale data. For each variable measured with a Likert scale, the statistics reported include the arithmetic mean (mean) and standard deviation (std dev.) of the responses in each company group, the significance of group
mean differences (sig.) indicated by the F-test, and the group sizes (n). If a significant difference was found in the ANOVA at the 0.05 level, paired t-test comparisons were conducted to see which of the company groups differ from each other. The criterion item ('Overall, the impact of [the name of the Enterprise System] on the agency has been positive') was used to establish the peripheral differences between the two cohorts. Analysis of our survey data indicates that significant differences exist between the medium-sized and large organizations, with a high F value of 5.22. It is also observed that the large organizations demonstrate a higher mean score for the criterion item than the medium organizations. Encouraged by the findings above, using the averages of each of the four success dimensions, we conducted an independent sample t-test to further explore the differences between the two organizational sizes with regard to the dimensions. Results depicted in Figure 4 indicate significant differences between the two organizational sizes in three of the four success dimensions. The differences are observed in System Quality, Individual Impacts and Organizational Impacts. Having determined that the two organizational sizes demonstrate significant differences in relation to the Enterprise System success dimensions, the chapter now attempts to identify where those differences exist and whether an Enterprise System investment favors a particular organizational size. In order to establish this, we now look at the
mean scores of each success dimension and their corresponding measures.
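The chapter names the tests but not a tool. Purely as a hedged re-creation of the comparison procedure, the following Java sketch uses the Apache Commons Math library (an assumption, not the authors' setup); the Likert scores for the two cohorts are invented for illustration.

import java.util.Arrays;
import org.apache.commons.math3.stat.inference.OneWayAnova;
import org.apache.commons.math3.stat.inference.TTest;

public class GroupComparison {
    public static void main(String[] args) {
        double[] medium = {4, 3, 5, 4, 3, 4, 2, 5};   // criterion item, medium agencies (invented)
        double[] large  = {5, 6, 4, 6, 5, 5, 6, 4};   // criterion item, large agencies (invented)

        // one-way ANOVA across the two cohorts
        OneWayAnova anova = new OneWayAnova();
        double f = anova.anovaFValue(Arrays.asList(medium, large));
        double p = anova.anovaPValue(Arrays.asList(medium, large));
        System.out.printf("F = %.2f, p = %.4f%n", f, p);

        // follow-up two-sample t-test between the cohorts (alpha = 0.05)
        double pT = new TTest().tTest(medium, large);
        System.out.println("t-test significant at 0.05: " + (pT < 0.05));
    }
}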
SYSTEM QUALITY
Figure 4. Results of t-test for the success dimensions (alpha = 0.05)
964
Size Matters!
Figure 5. Mean scores for system quality
INFOrMAtION QUALIty Measures of information quality focus on the output – both on-screen and reports – produced by the system, and the value, usefulness or relative importance attributed to the output by the users. In an early leading study of IS success, Bailey and Pearson (1983) identified nine characteristics of information quality: accuracy, precision, currency, timeliness, reliability, completeness, conciseness, format and relevance. Sirinivasan (1985) added ‘understandability’ of information as another important sub-construct; while Saaksjarvi and Talvinen (1993) employed content, availability, accuracy as sub-construct measures of information quality in their study of marketing information systems. Rainer (1995) found accuracy, timeliness,
conciseness, convenience and relevance as being key aspects of Executive Information Systems information quality. Results of the exploratory survey and expert workshops revealed contextspecific measures of information quality and thus significant changes have been made to the sub-constructs of information quality. The study employs the six validated measures from the ESSuccess of Gable Sedera et at 2003; Sedera Gable 2004. The Figure 6 depicts the mean scores for the six measures of Information Quality. Similar to System Quality dimension discussed above, large organizations demonstrate a higher mean score for all six measures. Though all mean-scores reported for Information Quality were higher for large organizations, substantial differences were observed only in
Figure 6. Mean scores for information quality
relation to Information Relevance and the format of the reports.
INDIVIDUAL IMPACT
Individual impact is concerned with how the Enterprise System influences the performance of the individual user. Individual impact tends to encompass a broad range of subjective measures, such as confidence in decisions made, improvements in decision-making, and the time to reach a decision (Ein-Dor, Segev et al. 1981; Sirinivasan 1985; Kim and Lee 1986). Dickson et al. (1977) provided early insights into Individual Impact, citing decision quality, decision time, decision confidence, and estimated outcomes. Though individual productivity and decision-making effectiveness have been mentioned in prior studies as impacts from the system, the potential benefits from the Enterprise System exceed those and include aspects such as facilitating organizational learning and information recall and awareness through organizational transparency. Observing the mean scores reported in Figure 7, patterns similar to the previous two success dimensions are observed, with large organizations receiving more benefits from the Enterprise System. More importantly, the mean score analysis demonstrates a bleak picture of the amount of individual benefits received by the medium-scaled organizations, with substantial differences in all four perspectives.
Figure 7. Mean scores for individual impacts
ORGANIZATIONAL IMPACTS
The impact of an Enterprise System on organizational performance is somewhat difficult to isolate from the general organizational performance indicators. The eight measures of the ES-success measurement model purportedly isolate the impact of the system from that of the organization. The analyses of the mean scores for the eight measures support all the observations above favoring the large organizations. It is also observed that the medium organizations report substantially lower mean scores for five of the eight measures. The biggest differences were seen in the facilitation of e-business through Enterprise Systems and the introduction of optimal business process changes. The three cost-related aspects – (1) reduction of organizational costs, (2) reduction of staff and (3) operational cost reductions – have received below-median scores for both organizational cohorts (see Figure 8 for details).
Figure 8. Mean scores for organization impacts
CONCLUSION
employed the number of Enterprise System User Licenses to classify the organizations in to the two organizational sizes. The authors argued the bias and the influence that other factors (e.g. number of employees and budget) may bring forth in a study of this nature. The results empirically provide evidence to a well-known anecdote that traditional Enterprise Systems are better suited for the Large Organizations. The results demonstrated significant differences between the medium and large organizations in relation to Enterprise System Quality, Impacts to the Individuals, and Impacts to the Organization. No differences were observed in relation to the Quality of Information derived from the system. Substantial differences between the two types of organizations were observed in relation to Individual and Organizational impacts – raising concerns over the suitability of Enterprise Systems for medium sized organizations. The result also demonstrated that some of the common system related issues, such as customization, are equally deterrent to both organizational types. Similarly, the innate Enterprise System advantages like the integration are equally beneficial to both organizational types. We recognize two attributes that may have contributed to the under-performance of Enterprise System in medium tiered organizations. The economies of scale could be one the lead-
factors that hamper the results of medium-tier organizations. From the study findings it is evident that medium-sized organizations have received reasonable benefits through the System and Information Quality dimensions. However, the stark differences observed in Individual and Organizational Impacts suggest that though the system and information quality were adequate, mid-sized companies have failed to attain cost, productivity and resource benefits. Specifically, in resource-demanding ERP investments, larger enterprises have been found to be able to take advantage of economies of scale; hence, compared to their larger counterparts, smaller companies face a relatively bigger commitment when adopting Enterprise Systems (Mabert, Soni et al. 2000). Secondly, the resource limitations characteristic of medium organizations are identified as another probable contributor to the poorer success reported by the medium-sized companies. Akin to the popular view among practitioners that 'implementing an Enterprise System is just the beginning', organizations are required to make continuous investments into optimizing an Enterprise System. The resource scarcity of mid-sized organizations may not allow further investments into the Enterprise System for training, upgrades, business process improvements and organizational change management practices. The findings are particularly important for IT practitioners (and academics alike) to understand the diversity of impacts received from Enterprise Systems and the importance of contextual factors. At a time when Enterprise System vendors are moving aggressively towards scaled-down systems specifically targeting small organizations, the study results provide some caution over the claimed benefits of Enterprise Systems.
REFERENCES
Bailey, J. E., & Pearson, S. W. (1983). Development Of A Tool For Measuring And Analyzing Computer User Satisfaction. Management Science, 29(5), 530–545. doi:10.1287/mnsc.29.5.530
Baker, W. H. (1987). Status of information management in small businesses. Journal of Systems Management, 38(4), 10–15.
Bilili, S., & Raymond, L. (1993). Information technology: Threats and opportunities for small and medium-sized enterprises. International Journal of Information Management, 13(6), 439–448. doi:10.1016/0268-4012(93)90060-H
Chau, P. Y. K. (1994). Selection of packaged software in small businesses. European Journal of Information Systems, 3(4), 292–302. doi:10.1057/ejis.1994.34
Chau, P. Y. K. (1995). Factors used in the selection of packaged software in small businesses: views of owners and managers. Information & Management, 29(2), 71–78. doi:10.1016/0378-7206(95)00016-P
Cheney, P. H. (1983). Getting The Most Out Of Your First Computer System. American Journal of Small Business, 7(4), 476–485.
Cooley, P. L., & Walz, D. T. (1987). A Research Agenda For Computers And Small Business. American Journal of Small Business, 11(3), 31–42.
Cragg, P. B., & King, M. (1993). Small-firm computing: motivators and inhibitors. MIS Quarterly, 17(1), 47–60. doi:10.2307/249509
Cragg, P. B., & Zinatelli, N. (1995). The evolution of information systems in small firms. Information & Management, 29(1), 1–8. doi:10.1016/0378-7206(95)00012-L
d’Amboise, G., & Muldowney, M. (1988). Management theory for small business: attempts and requirements. Academy of Management Review, 13(2), 226–240. doi:10.2307/258574 DeLone, W. H. (1981). Firm Size And The Characteristics Of Computer Use. MIS Quarterly, 5(4), 65–77. doi:10.2307/249328 DeLone, W. H. (1988). Determinants Of Success For Computer Usage In Small Business. MIS Quarterly, 12(1), 50–61. doi:10.2307/248803 Dickson, G., & Senn, J. (1977). Research In Management Information Systems: The Minnesota Experiments. Management Science, 23(9), 913–923. doi:10.1287/mnsc.23.9.913 Doukidis, G. I., & Lybereas, P. (1996). Information systems planning in small businesses: A stage of growth analysis. Journal of Systems and Software, 33(2), 189–201. doi:10.1016/01641212(95)00183-2 Ein-Dor, P., & Segev, E. (1978). Organizational Context And The Success Of Management Information Systems. Management Science, 24(10), 1064–1077. doi:10.1287/mnsc.24.10.1064 Ein-Dor, P., Segev, E., et al. (1981). Use Of Management Information Systems: An Empirical Study. Proceedings of the 2nd International Conference on Information Systems, Cambridge, Massachusetts, Association for Information Systems. Everdingen, Y., & Hillegersberg, J. (2000). ERP adoption by European midsize companies. Communications of the ACM, 43(4), 27–31. doi:10.1145/332051.332064 Farhoomand, F., & Hrycyk, G. P. (1985). The Feasibility Of Computers In The Small Business Environment. American Journal of Small Business, 9(4), 15–22.
Gable, G., Sedera, D., et al. (2003). Enterprise Systems Success: A Measurement Model. Proceedings of the 24th International Conference on Information Systems, Seattle, Washington, Association for Information Systems. Harrison, D. A., Mykytyn, J. P. P., & Riemenschneider, C. K. (1997). Executive Decisions about Adoption of Information Technology in Small Business: Theory and Empirical Tests. Information Systems Research, 8(2), 171–196. doi:10.1287/ isre.8.2.171 Hillegersberg, J. V., & Kumar, K. (2000). ERP experience and evolution. Communications of the ACM, 43(4), 23–26. Hong, K.-K., & Kim, Y.-G. (2001). The Critical Success Factors For ERP Implementation: An Organizational Fit Perspective. Information & Management, 40(1), 25–40. doi:10.1016/S03787206(01)00134-3 Iacovou, C. L., & Benbasat, I. (1995). Electronic data interchange and small organizations, adoption and impact of technology. MIS Quarterly, 19(4), 465–485. doi:10.2307/249629 Kim, E., & Lee, J. (1986). An Exploratory Contingency Model Of User Participation And MIS Use. Information & Management, 11(2), 87–97. doi:10.1016/0378-7206(86)90038-8 Lai, V. S. (1994). A Survey Of Rural Small Business Computer Use: Success Factors And Decision Support. Information & Management, 26(6), 297–304. doi:10.1016/0378-7206(94)90027-2 Laukkanen, S., & Sarpola, S. (2007). Enterprise size matters: objectives and constraints of ERP adoption. Journal of Enterprise Information Management, 20(3), 319–334. doi:10.1108/17410390710740763
Levy, M., & Powell, P. (1998). SME flexibility and the role of information systems. Small Business Economics, 11(2), 183–196. doi:10.1023/A:1007912714741
Mabert, V. A., & Soni, A. (2000). Enterprise Resource Planning Survey Of U.S. Manufacturing Firms. Production and Inventory Management Journal, 41(2), 52–58.
Mabert, V. A., & Soni, A. (2003). The impact of organization size on enterprise resource planning (ERP) implementations in the US manufacturing sector. Omega, 31, 235–246. doi:10.1016/S0305-0483(03)00022-7
Rainer, J. K. R., & Watson, H. J. (1995). The Keys to Executive Information System Success. Journal of Management Information Systems, 12(2), 83–99.
Raymond, L. (1985). Organizational Characteristics And MIS Success In The Context Of Small Business. MIS Quarterly, 9(1), 37–52. doi:10.2307/249272
Raymond, L., & Bergeron, F. (1992). Personal DSS success in small enterprises. Information & Management, 22(5), 301–308. doi:10.1016/0378-7206(92)90076-R
Melone, S. C. (1985). Computerising small business information systems. Journal of Small Business Management, (April): 10–16. Montazemi, A. R. (1988). Factors Affecting Information Satisfaction In The Context Of The Small Business Environment. MIS Quarterly, 12(2), 238–256. doi:10.2307/248849 Nickell, G. S., & Seado, P. C. (1986). The Impact Of Attitudes And Experience On Small Business. American Journal of Small Business, 10(1), 37–48. Pan, S. L., Newell, S., et al. (2001). Knowledge Integration As A Key Problem In An ERP Implementation. Proceedings of the 22nd International Conference on Information Systems, New Orleans, Louisiana, Association for Information Systems. Piturro, M. (1999). How midsize companies are buying ERP. Journal of Accountancy, 188(3), 41–48. Proudlock, M. J., & Phelps, B. (1999). IT adoption strategies: Best practice guidelines for professional SMEs. Journal of Small Business and Enterprise Development, 6(4), 240–252. doi:10.1108/ EUM0000000006678
970
Saaksjarvi, M. T. V., & Talvinen, J. M. (1993). Integration And Effectiveness Of Marketing Information Systems. European Journal of Marketing, 27(1), 64–79. doi:10.1108/03090569310024567 Schultz, R. L., & Slevin, D. P. (1975). Implementation and organisational validity: An empirical investigation. Implementing operational research / management science. R. L. Shultz and D. P. Slevin. New York, Elsevier, North-Holland: 153-182. Sedera, D., & Gable, G. (2004). A Factor and Structural Equation Analysis of the Enterprise Systems Success Measurement Model. International Conference of Information Systems, Washington, D.C. Sedera, D., Gable, G., et al. (2003). ERP Success: Does Organization Size Matter? Proceedings of the 7th Pacific Asia Conference on Information Systems, Association for Information Systems. Shang, S., & Seddon, P. B. (2002). Assessing And Managing The Benefits Of Enterprise Systems: The Business Manager’s Perspective. Information Systems Journal, 12(4), 271–299. doi:10.1046/j.1365-2575.2002.00132.x Sirinivasan, A. (1985). Alternative Measures Of System Effectiveness: Associations And Implications. MIS Quarterly, 9(3), 243–253. doi:10.2307/248951
Size Matters!
Soh, C. P. P., & Yap, C. S. (1992). Impact of consultants on computerisation success in small businesses. Information & Management, 22, 309–319. doi:10.1016/0378-7206(92)90077-S Thatcher, M. E., & Oliver, J. R. (2001). The impact of technology investments on a firm’s production efficiency, product quality, and productivity. Journal of Management Information Systems, 18(2), 17–45. Thong, J. Y. L. (2001). Resource constraints and information systems implementation in Singaporean small business. Omega, 29(2), 143–156. doi:10.1016/S0305-0483(00)00035-9 Turner, J. S. (1992). Personal DSS success in small business. Information & Management, 22, 301–308. doi:10.1016/0378-7206(92)90076-R Welsh, J. A., & White, J. F. (1981). A amall business is not a little big business. Harvard Business Review, 59(4), 18–32. Whisler, T. (1970). The Impact Of Computers On Organizations. New York, NY, Praeger Publishers.
KEY TERMS AND DEFINITIONS

Enterprise System: Customizable, standard software solutions that have the potential to link and automate all aspects of the business, incorporating core processes and main administrative functions into a single information and technology architecture.

Individual-Impact: A measure of the extent to which [the IS] has influenced the capabilities and effectiveness, on behalf of the organization, of key users.

Information-Quality: A measure of the quality of [the IS] outputs: namely, the quality of the information the system produces in reports and on-screen.

Organizational-Impact: A measure of the extent to which [the IS] has promoted improvement in organizational results and capabilities.

Public Sector: The part of the economic, administrative and governance process that deals with the delivery of goods and services by and for the government.

SAP: SAP [used to denote SAP R/3 software] is a market-leading Enterprise System software.

System-Quality: A measure of the performance of [the IS] from a technical and design perspective.
This work was previously published in Handbook of Research on Enterprise Systems, edited by Jatinder N. D. Gupta, Sushil Sharma and Mohammad A. Rashid, pp. 218-231, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 4.5
Web Services as XML Data Sources in Enterprise Information Integration

Ákos Hajnal, Computer and Automation Research Institute, Hungary
Tamás Kifor, Computer and Automation Research Institute, Hungary
Gergely Lukácsy, Budapest University of Technology and Economics, Hungary
László Z. Varga, Computer and Automation Research Institute, Hungary
ABSTRACT

More and more systems provide data through web service interfaces, and these data have to be integrated with the legacy relational databases of the enterprise. The integration is usually done with enterprise information integration systems, which provide a uniform query language to all information sources; therefore the XML data sources of web services, which have a procedural access interface, have to be matched with relational data sources that have a database interface. In this chapter the authors provide a solution to this problem by describing the Web Service Wrapper component of the SINTAGMA Enterprise Information Integration system. They demonstrate web services as XML data sources in enterprise information integration by showing how the Web Service Wrapper component integrates XML data of web services in the application domain of digital libraries.

DOI: 10.4018/978-1-60566-330-2.ch005
INTRODUCTION

Traditional Enterprise Information Integration focuses mainly on the integration of different relational data sources; however, recent enterprise information systems follow the service-oriented architecture pattern and are based on web services technology1. In addition, more and more information and service providers on the internet provide a web service interface to their systems. The integration of these new information sources requires that the Enterprise Information Integration system has an interface towards web services.
This chapter describes a solution to this problem using the SINTAGMA Enterprise Information Integration System2 and extending this system with a Web Service Wrapper component, which is the main contribution of this chapter. SINTAGMA is a data-centric, monolithic information integration system supporting semi-automatic integration of relational sources using tools and methods based on logic and logic programming (see Benkő et al. 2003). It builds on the SILK tool-set, which is the result of the SILK (System Integration via Logic & Knowledge) EU project. In order to prepare for the challenge of integrating XML data provided by web services, we extended the original SINTAGMA system in two directions. First, the architecture of the SINTAGMA system was changed significantly to be made up of loosely coupled components rather than a monolithic structure. Second, the functionality has become richer as, among others, the system now deals with web services as information sources. The component responsible for this is the Web Service Wrapper. Mixing relational data sources and web services in an information integration scenario can be very useful, as demonstrated by a use case by Lukácsy et al. 2007, and poses the challenge of representing procedural information as relational data.

This chapter is structured as follows. First we put the problem in the context of related work, then we describe the main ideas behind the SINTAGMA system in a nutshell, then we provide an overview of the basic web service concepts and the modelling language of SINTAGMA, then we present how we model and query web services, with samples. Finally, we demonstrate web service integration in a digital library application and summarize our application experiences and conclusions.
RELATED WORK

There are several completed and ongoing research projects using logic-based approaches for Enterprise Application Integration (EAI) and Enterprise Information Integration (EII). The generic EAI research stresses the importance of the Service Oriented Architecture and the provision of new capabilities within the framework of Semantic Web Services. Examples of such research projects include DIP (see Vasiliu et al. 2004) and INFRAWEBS (see Grigorova 2006). We have also approached the EAI issue from the agent technology point of view (see Varga et al. 2005 and Varga et al. 2004). These attempts aim at the semantic integration of Web Services, in most cases using Description Logic based ontologies, agent and Semantic Web technologies. The goal of these projects is to support the whole range of EAI capabilities like service discovery, security and high reliability. Most of the logic-based EII tools use description logics and take a similar approach as we did in SINTAGMA; that is, they create a description logic model as a view over the information sources to be integrated. The basic framework of this solution is described e.g. by Calvanese et al. 1998. The disadvantage is that these types of applications deal with relational sources only and are therefore not applicable to process modeling. This chapter unifies the procedural EAI approach and the relational EII approach by integrating relational and functional XML information sources within the SINTAGMA system. The advantage of this approach is that the integration team has to implement neither a web service interface to relational databases nor a relational database interface to web services, because the SINTAGMA system automatically integrates the different sources. In addition to the integration, the SINTAGMA system includes several optimizations when answering queries on the integrated system.
The integration of web services with relational data sources includes two important tasks: modeling the web services in the SINTAGMA system and querying the XML data returned by the web service. Modeling web services in SINTAGMA is a reverse engineering task that seems to be straightforward; however, it is necessary. Most tools available for modeling web services represent the opposite approach: they create WSDL from UML. Although there exist tools for modeling WSDL in UML (e.g. http://wsdl2xmi.tigris.org/) or modeling XSD in UML (e.g. supported by an XML editor), we did not find a tool that combines the two in the appropriate way from our point of view. WSDL modeling tools focus on the structure of the WSDL but do not provide a model of the message schema contained in (or imported by) the WSDL. XSD modeling tools do not provide WSDL-specific information such as SOAP protocols and network locations. Another problem is that, although models in SINTAGMA are similar to UML, models generated by the available tools cannot be used directly, because SINTAGMA has its own modeling language (called SILan) and not all UML components/features are supported by SINTAGMA. These are the reasons why the new modeling procedure (described in this chapter) is needed. There are tools for querying XML; the most well-known are XPath and XQuery. We studied the possibility of transforming the SQL-like queries supported by SINTAGMA to XQuery statements. However, we found that the SQL-like query language and XQuery are essentially different. XQuery is based on the XML instance, and not on the schema: it is possible to query XML fragments in the XML instance given by XPath expressions, but not XML fragments corresponding to a specific complex type definition (class instances in our terms). The problem is that, if the schema is recursive, querying the instances of a complex type would (theoretically) require an infinite number of XPath expressions.
Another problem was that the results provided by XQuery require further transformation before they are returned to SINTAGMA. For these reasons we decided to implement a query engine as described in this chapter.
THE SINTAGMA APPROACH

The main idea of our approach is to collect and manage meta-information on the sources to be integrated. These pieces of information are stored in the model warehouse of the system in the form of UML-like models, constraints and mappings. This way we can represent structural as well as non-structural information, such as class invariants, implications, etc. All of our modeling constructs have well defined semantics. The process of querying these models is called mediation. Mediation decomposes complex integrated queries into simple queries answerable by individual information sources and, having obtained data from these, composes the results into an integrated form. For mediation, we need mappings between the separate sources and the integrated model. These mappings are called abstractions because they often provide a more abstract view of the notions present in the lower level models. We handle models of different kinds. From one point of view we can speak of unified models and local models. Unified models are created from other ones in the process of integration, while the local models represent particular information sources. More importantly, we distinguish between application and conceptual models. The application models represent the structure of an existing or potential system, and because of this they are fairly elaborate and precise. Conceptual models, however, represent mental models of user groups, and are therefore more vague than application models. Access to heterogeneous information sources is supported by wrappers. Wrappers hide the syntactic differences between sources of different kinds (e.g. RDBMS, XML, web services, etc.) by presenting them to upper layers uniformly as UML models. Wrappers also support queries over these models, as they are capable of directly accessing the types of data sources they are responsible for.
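To make the wrapper concept more tangible, the Python sketch below restates it as a minimal interface; the class and method names are our own invention for illustration and do not reflect the actual SINTAGMA API.

from abc import ABC, abstractmethod

Model = dict  # placeholder for SINTAGMA's UML-like model representation

class Wrapper(ABC):
    """Hides the syntactic differences of one kind of data source."""

    @abstractmethod
    def build_model(self, source_descriptor: str) -> Model:
        """Read the source's self-description (e.g. a relational schema or
        a WSDL) and present it to the upper layers as a UML-like model."""

    @abstractmethod
    def execute(self, query: str) -> list:
        """Evaluate a (decomposed) SILan query directly against the source."""

class WebServiceWrapper(Wrapper):
    def build_model(self, wsdl_url: str) -> Model:
        raise NotImplementedError  # transform the WSDL's XSD into SILan classes

    def execute(self, query: str) -> list:
        raise NotImplementedError  # build a SOAP request, call the service, map the answer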
WEB SERVICE INTEGRATION

In the following we briefly overview the most important concepts of web services and introduce the modeling language of SINTAGMA, called SILan. Then we describe in detail how different web services can be represented by models and queried in the system. Finally, we discuss sample web service models.
Web Services

Web services aim to provide some document- or procedure-oriented functionality over the network that can be accessed in a standardized way, typically using SOAP (Simple Object Access Protocol3) message exchange over HTTP. SOAP is an XML-based protocol for exchanging structured and typed information between peers. SOAP messages consist of an <Envelope> element with a child <Body> element. Body entries will be referred to as the message content throughout this chapter. The interface of a web service is described in WSDL (Web Services Description Language4), which is based on XML. In WSDL, a set of operations with input and output messages is described abstractly to define a network endpoint. These endpoints are then bound to a concrete protocol and message serialization format. A web service is defined as a collection of ports that are bound to the network endpoints defined previously. The location of ports and protocol bindings are specified by SOAP extensibility elements in the WSDL. Messages are defined using the XSD (XML Schema Definition5) type system.
WSDL is structured as follows. Within the root <definitions> element, the child <types> element encapsulates the XSD definitions (XSD schema) of the different data types and structures used in message contents. It is followed by a series of <message> declarations that refer (in their <part> elements) to the types defined previously. <portType> element(s) wrap a sequence of <operation> elements representing abstract operations, each having <input> and <output> (and optional <fault>) elements that refer to the defined messages. <binding> element(s) specify the transport protocol and message formats for the set of operations listed in a <portType>. Finally, the <service> element contains one or more <port> elements, each linked to a <binding>, and a network location in the child <soap:address> element. (We use the soap namespace prefix to indicate elements belonging to the SOAP URI. WSDL namespace prefixes are omitted. XSD elements will be prefixed by xs.) In this chapter, we consider web services conforming to Basic Profile Version 1.1 of the Web Services Interoperability Organization6. For simplicity, we assume a document-literal style messaging protocol, one targetNamespace in the XSD type definition, and one service in the WSDL with one port and (potentially) several operations. In document-literal style, message contents are entirely defined by the XSD schema within the WSDL. We note that none of the above constraints are theoretical limitations of our approach, and they are typically met by web services in practice.
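To make the message exchange concrete, the following short Python sketch posts a document-literal SOAP request over HTTP using only the standard library; the endpoint URL and the body element are hypothetical placeholders, not taken from any WSDL in this chapter.

import urllib.request

# Hypothetical endpoint and payload; a document-literal request carries the
# operation's input element, defined by the WSDL's XSD schema, inside <Body>.
ENDPOINT = "http://example.org/service"
ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <AddOperation xmlns="http://example.org/add">
      <op1>1</op1>
      <op2>2</op2>
    </AddOperation>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # the answer SOAP message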
Modeling in SINTAGMA

Different data sources are modeled uniformly in SINTAGMA using the modeling language of the system, called SILan. This language is based on UML (Unified Modeling Language, see Fowler & Scott 1998) and Description Logics (see Horrocks 2002), and its syntax resembles IDL7, the Interface Description Language of CORBA.
The main constructs are classes and associations, since these are the carriers of information. A class denotes a set of entities called the instances of the class. Similarly, an n-ary association denotes a set of n-ary tuples of class instances called links. In a binary association one of the connections can be declared composite, which means that the instance at the composite end is part of the instance at the other end (and is not part of any other instance). Composition associations are also referred to as compositions for short. Connections of associations can be declared as input, which means that the association can only be queried if all the input ends are available. Associations also have multiplicities that are used to define cardinality constraints, e.g. one-to-one or one-to-many relations. Classes and associations have unique names within a model. Classes can have attributes, which are defined as functions mapping the class to a subset of the values allowed by the type of the attribute. Attributes have unique names within a class and have one of the SINTAGMA-supported types. Invariants can be specified for classes and associations. Invariants give statements about instances of classes (and links of associations) that hold for each of them. Invariants are based on the language OCL (Object Constraint Language, see Clark & Warmer 2002).
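The following Python sketch restates these constructs as a small metamodel; the names are ours and serve only to summarize the SILan concepts just described, not to mirror SINTAGMA's internal representation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str          # unique within its class
    silan_type: str    # one of the SINTAGMA-supported types, e.g. "Integer"

@dataclass
class Class_:          # trailing underscore avoids the Python keyword
    name: str                     # unique within a model
    attributes: List[Attribute] = field(default_factory=list)
    invariants: List[str] = field(default_factory=list)  # OCL-based statements

@dataclass
class ConnectionEnd:
    cls: Class_
    alias: str
    multiplicity: str = "1..1"    # cardinality constraint, e.g. "1..*"
    is_input: bool = False        # must be bound before the association is queried
    is_composite: bool = False    # the composite end is part of the other end

@dataclass
class Association:
    name: str                     # unique within a model
    ends: List[ConnectionEnd] = field(default_factory=list)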
Modeling Web Services

Modeling web services in SINTAGMA basically means the construction of a SILan representation of the data structures and data types used in communication with the web service. The schemes of the different web service messages are defined by an XSD language description in the <types> element of the WSDL (or imported there). The schema typically consists of a set of element declarations and simple and complex type definitions. Element declarations declare named XML elements whose type corresponds to a built-in XML type (e.g. int, string), a simple type definition, or a complex type definition. Simple type definitions restrict built-in XML types (or other simple types) by giving enumerations, minimum and maximum values, etc. Complex type definitions combine a set of element declarations, or declare XML attributes, respectively. To each element declaration a cardinality can be assigned to specify optional or multiple occurrences of the element. Simple and complex type definitions can also be extended or restricted by other type definitions. We note that there are many features of XSD omitted here for clarity considerations.

The SINTAGMA model is obtained by transforming the XSD description to a SILan representation. A unique SILan class is assigned to each complex type, with a unique name. Simple type element declarations (having a built-in XML type or a simple type definition) within a complex type are added as SILan attributes of the class. The name of the attribute is given by the name attribute of the element declaration, and the SILan type is derived from the built-in XML type according to a predefined mapping. Complex type element declarations within a complex type are represented by composition associations between the classes assigned to the different complex types. Compositions are named uniquely, connection end aliases are given by the name attributes of the element declarations (used at navigation), and occurrence indicators (minOccurs, maxOccurs) are converted to the appropriate multiplicity of the composition (e.g. 1..1, 1..*). Simple type element declarations with multiple occurrences cannot be represented as simple class attributes in SILan. Therefore separate classes are created wrapping the simple types, which are then connected to the original container class by a composition association with the appropriate multiplicity. Optional attributes cannot be expressed in SILan. Their values are simply set to null at query time (see next section) if they are absent, instead of creating compositions with optional multiplicity. The default String type is assigned to XML types that cannot be represented precisely in SILan (e.g. date). These
types will hold the string content of the related XML element. Simple type restrictions are added as attribute invariants, and complex type extensions are indicated by inheritance relations between the corresponding classes. The message schemes modeled above are then associated with web service operations: an association is created between the classes representing the input and output messages of each operation in the WSDL. Associations are named uniquely, and end connections corresponding to input messages are labeled as «input» in SILan (the angle quotes notation corresponds to UML's stereotype notation). Connection aliases of the associations are given the element names wrapping the input and output XML messages of the operation (referred to in the <message> elements of the WSDL). An example WSDL fragment of a simple add web service is shown in Figure 1, together with the created SILan model and the corresponding UML class diagram. The constructed model contains every single piece of data that can be passed to or returned by the web service, in terms of classes, compositions and attributes, as well as the different web service operations, which are represented by associations. Web service invocation, however, requires additional details of the WSDL that are stored as metadata in the model (used by the Web Service Wrapper component). One is the network location of the web service, which is obtained from the <soap:address> element of the port. The other is the namespace of the XML messages used in the communication with the web service, which is given by the targetNamespace attribute of the <xs:schema> element. In practice, web services can be more complicated. It may occur that a web service uses several schemes and namespaces, which requires introducing namespace metadata into different classes instead of using a single, global namespace in the model. A WSDL can declare several ports combining web service operations at different network locations. In this case, the network location(s) need to be assigned to the associations representing the operations instead of the model. When a web service uses the rpc protocol (<soap:binding style="rpc"/>), elements that are declared at input and output messages are wrapped first in input and output classes, which are then
Figure 1. An example WSDL fragment of a simple add web service together with the created SILan model and the corresponding UML class diagram (© 2008 Á. Hajnal, T. Kifor, G. Lukácsy, L. Z. Varga. Used with Permission)
connected by the operation association. In the case of document style binding no such problem occurs, since these classes are already created when processing the schema. A single element is allowed in the message definitions that refer to them. Web services not conforming to the WS-I Basic Profile, using encoded messaging style, WSDL arrays, non-SOAP protocols, etc., need further workarounds, which are omitted here for clarity considerations.
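As a rough illustration of the transformation just described, the following Python sketch walks the complex types of an XSD schema and prints SILan-like class declarations. It is a simplification under our own assumptions (it ignores restrictions, extensions, multiple namespaces and most other XSD features, and assumes type references use the xs prefix); it is not the actual SINTAGMA implementation.

import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"
# Assumed mapping from built-in XML types to SILan types.
TYPE_MAP = {"xs:int": "Integer", "xs:string": "String", "xs:boolean": "Boolean"}

def xsd_to_silan(schema_xml):
    """Emit a SILan-like class for each complex type of the schema."""
    root = ET.fromstring(schema_xml)
    lines = []
    for ctype in root.iter(XS + "complexType"):
        lines.append("class %s {" % ctype.get("name"))
        for el in ctype.iter(XS + "element"):
            name, xml_type = el.get("name"), el.get("type", "")
            if xml_type in TYPE_MAP and el.get("maxOccurs", "1") == "1":
                # simple type, single occurrence -> class attribute
                lines.append("  attribute %s %s;" % (TYPE_MAP[xml_type], name))
            else:
                # complex type or multiple occurrence -> composition
                # association to another class (not expanded in this sketch)
                lines.append("  // composition to %s" % (xml_type or name))
        lines.append("};")
    return "\n".join(lines)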
Querying Web Services through SINTAGMA

The SILan query language is an object-oriented query language designed to formulate queries over SINTAGMA models. The syntax is similar to the SQL used at relational databases: a SILan query is composed of SELECT, FROM and WHERE parts. The SELECT keyword is followed by a comma-separated list of the class attributes (columns) of interest, the FROM part enumerates the classes (tables) whose instances (rows) we search among, and the WHERE part specifies constraints that must be satisfied by all the instances in the result. On the other hand, SILan is an object-oriented language that relies on UML modeling, and SILan also supports OCL expressions in queries, by which we can specify navigations through objects. In contrast to relational databases, functional data sources require input to populate the "database" with data before the query can actually be executed. In the case of web services, the input includes the name of the web service operation and the input parameters of the operation. When models representing web services are queried in SINTAGMA, the web service operation must be given in the FROM part as the association representing the operation. For example, the web service operation called addOperation of the example in Figure 1 is queried by the construct below (relevant parts are highlighted in boldface characters) (see Box 1).
978
Box 1.

SELECT addOperation.AddResponse.result,
       addOperation.AddResponse.Details.time
FROM addOperation
WHERE addOperation.AddOperation.op1=1 AND
      addOperation.AddOperation.op2=2
The operation's input parameters are given by constraints in the WHERE part of the query. Constraints use the '=' operator, in the form of Class.field=value, which has value assignment semantics with respect to the input parameters. The '.' operator is used to refer to a class attribute in SILan, but it is also used to navigate along associations or compositions. For example, Class1.association1.field1 denotes attribute field1 in the class referred to by association1 in Class1. Navigation is used to assign values to input parameters starting from the association representing the operation. In the case of several input parameters, the list of assignments is separated by the AND logical operator. This way, arbitrarily complex web service inputs can be formulated. For example, input parameters op1 and op2 of addOperation are given values by the query shown in Box 2. Queries for models representing web services are executed by the Web Service Wrapper component. The passed query is parsed first, then the appropriate SOAP message is constructed and sent to the web service provider. Starting from the association in the query and collecting all the constraints for the input side (navigations towards the input end), an XML tree is constructed that combines all the web service inputs. Navigations are represented by wrapper XML elements, and attribute constraints are represented by simple
Box 2.

SELECT addOperation.AddResponse.result,
       addOperation.AddResponse.Details.time
FROM addOperation
WHERE addOperation.AddOperation.op1=1 AND
      addOperation.AddOperation.op2=2
XML elements with content corresponding to the constant value. Navigations and constraints that refer to the same instance are unified in the XML tree. The namespace of the XML fragment is set according to the namespace metadata in the model (the targetNamespace of the schema), and the fragment is then wrapped in an appropriate SOAP envelope. The input SOAP message composed for the query of the addOperation is shown below:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <AddOperation xmlns="...">
      <op1>1</op1>
      <op2>2</op2>
    </AddOperation>
  </soap:Body>
</soap:Envelope>

When the SOAP message is sent to the internet location of the web service (stored as metadata in the model), the requested operation will be executed by the web service provider. The results are sent back to the Web Service Wrapper component in another SOAP message, and, unless a SOAP fault (e.g. missing input parameters) or a no-response error occurs, a temporary internal "database" is populated with data. The internal database is set up by mapping the content of the answer SOAP message to the model. First, the XML document root is added as an instance of the class representing the operation's output, then the child nodes are processed recursively, considering the model schema: XML sub-elements are added as attributes of the current instance if they are simple, or as new instances of the corresponding classes, respectively, if they are complex. Intuitively, this means that a new row is created for the root element in the table of the operation output, and field values are obtained by iterating through all child elements. If the name of the child element corresponds to a class attribute (simple type), the value of the field is given by the
content of the XML element. If the child element corresponds to a composition (complex type), the child node is processed recursively (considering the referred class), and a relation is created from the current row to the new row in another table representing the child node. Class attributes for which no appropriate child element can be found are set to null. The textual content of XML elements is converted to the proper attribute type when filling field values. The input SOAP message content sent to the web service provider previously is also loaded into the internal database in the same way. An example answer SOAP message and the associated classes and attributes are shown in Box 3. The query specifies the set of classes and attributes of interest in the FROM and SELECT parts. Note that in SILan it is allowed to query associations as well as compositions in the FROM part, and to give navigations to attributes in the SELECT part. The WHERE part declares constraints on the instances either by constant constraints, where class attributes are compared to constant values using relational operators, or by association constraints that must hold between instances. The Web Service Wrapper component, with knowledge of the temporary internal database, can execute the query similarly to an SQL engine. Basically, the result is constructed by taking the Cartesian product of the instances of the relevant classes (listed in the FROM part). The constraints in the WHERE part are checked for each n-tuple of instances, and if all of them are satisfied, the
Box 3.

<AddResponse>           ◄ instance in class addResponse
  <result>3</result>    ◄ attribute result of the addResponse instance
  <Details>             ◄ instance in class details
    <time>0.01s</time>  ◄ attribute time of the details instance
  </Details>
</AddResponse>
selected attributes (SELECT part) are added to the result. The result of the query for the add web service operation contains a single row, with field result containing the integer value 3 and field time containing the string representing the execution time of the operation, e.g. 0.01s.
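The following Python sketch mimics this two-step behaviour, loading an answer message into per-class tables and then evaluating a query as a filtered Cartesian product. It is a toy illustration under our own simplifications (string-valued fields, callables for WHERE and SELECT), not SINTAGMA code.

from itertools import product
import xml.etree.ElementTree as ET

def load(node, cls, model, db):
    """Recursively map an XML node to a row in the table of its class."""
    row = {}
    db.setdefault(cls, []).append(row)
    for child in node:
        tag = child.tag.split("}")[-1]           # drop namespace, keep local name
        if tag in model[cls]["attributes"]:      # simple type -> field value
            row[tag] = child.text
        elif tag in model[cls]["compositions"]:  # complex type -> related row
            row[tag] = load(child, model[cls]["compositions"][tag], model, db)
    return row

def run_query(db, from_classes, where, select):
    """Cartesian product of the FROM classes, filtered by WHERE, projected by SELECT."""
    rows = []
    for tup in product(*(db[c] for c in from_classes)):
        if all(cond(tup) for cond in where):
            rows.append(tuple(proj(tup) for proj in select))
    return rows

# Toy model and answer message for the add service of Figure 1.
MODEL = {
    "AddResponse": {"attributes": {"result"}, "compositions": {"Details": "Details"}},
    "Details": {"attributes": {"time"}, "compositions": {}},
}
ANSWER = "<AddResponse><result>3</result><Details><time>0.01s</time></Details></AddResponse>"

db = {}
load(ET.fromstring(ANSWER), "AddResponse", MODEL, db)
print(run_query(db, ["AddResponse", "Details"],
                where=[lambda t: t[0]["result"] == "3"],
                select=[lambda t: t[0]["result"], lambda t: t[1]["time"]]))
# -> [('3', '0.01s')]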
Sample Web Service Models in SINTAGMA

We have implemented the Web Service Wrapper component for SINTAGMA and applied it to several web services, ranging from simple ones (such as the Google SOAP Search API, providing three operations) to complex ones (such as the Amazon E-Commerce Service, providing over 30 operations with transactions). After entering the URL of the WSDL, the model of the web service is built automatically. It can be viewed and browsed in a graphical user interface, and queries can be composed for the model. SILan abstractions can be created by which web services can participate in integration scenarios. Namely, the web service model can be connected to other models representing different data sources, for example other web services or relational databases.
Queries for the model are executed transparently by the wrapper, which communicates with the web service using SOAP. The necessary inputs are obtained from the query, and the appropriate request message is constructed and sent to the web service provider automatically. Result data are extracted from the answer message and returned to SINTAGMA. An example screenshot of the SINTAGMA system is shown in Figure 2, where Amazon's web service is queried.
DIGITAL LIBRARY DEMONSTRATION APPLICATION

In the previous section we have seen how a single web service can be modeled and queried in SINTAGMA. In this section we show a digital library application that demonstrates the integration of web services with the help of SINTAGMA. The digital library application is an OpenURL resolver application developed in the SINTAGMA project. OpenURL8 is a NISO standard9 for identifying documents with different types of metadata (for example author, title, ISBN,
Figure 2. Amazon's web service is queried in SINTAGMA (© 2008 Á. Hajnal, T. Kifor, G. Lukácsy, L. Z. Varga. Used with Permission)
ISSN) using the URL format. The following is a sample OpenURL: http://viola.oszk.hu:8080/sokk/OpenURL_Servlet?sid=OSZK:LibriVision&genre=book&aufirst=Jeno&aulast=Rejto&isbn=963-13-5374-5&title=Quarantine%20in%20the%20Grand%20Hotel&date=2005

The first part of the URL is a link resolver, in the above example viola.oszk.hu:8080/sokk/OpenURL_Servlet. The other part contains the metadata of the document, in the above example parameters like the first and last name of the author, the ISBN code and the title of the book. The same OpenURL can be created by several sources (for example in articles containing citations of the identified document, or in a document meta-database like The European Library10). In our demonstration application the query OpenURL is created by a web-based user interface, as shown in Figure 3. The document identified by the OpenURL can be located in several target places. In our demonstration application, as shown in Figure 3, the target places are the Amazon book store, which is a web service information source providing XML data services, and the Hungarian National Library, which provides a relational database. The OpenURL resolver has to search in several places using different protocols and different data models for the different possible targets of the document; therefore our OpenURL resolver application uses the different wrappers of SINTAGMA to integrate the different data models of the different protocols. SINTAGMA executes the queries for the targets and collects the results into a single unified model. The OpenURL resolver queries this unified model only and does not need to know the details of the lower level protocols and data models. As long as the OpenURL model remains the same, the OpenURL resolver does not need to be changed even if the protocol or the lower level data model changes. We have created an OpenURL unified conceptual model in SINTAGMA (the upper model)
manually, and the local application models (the lower models) for the targets (in our case for Amazon and for the Hungarian National Library) using the wrappers. Then we have created abstractions (relations) between the upper model and the lower models (one for each data source). The OpenURL resolver application queries the upper model and sees it as one single target. If there is a new target, we only have to generate a new lower model using the SINTAGMA wrapper and create an abstraction between the new model and the upper model. If the protocol or the data model of a target changes, we only have to regenerate the lower model and modify the old mapping (or create a new one) between the lower and the upper model. We do not have to modify, recompile and redeploy the source code of the client application.

Figure 3. Digital library demonstration application architecture (© 2008 Á. Hajnal, T. Kifor, G. Lukácsy, L. Z. Varga. Used with Permission)

First we created the following upper (conceptual) model of the OpenURL resolver in SILan (see Box 4).
Box 4.

model OpenURLModel {
  class OpenURL {
    attribute Integer id;
  };
  class OpenURLDescription {
    attribute Integer id;
    attribute Integer openURL;
    attribute String auFirst;
    attribute String auLast;
    attribute String title;
    attribute String issn;
    attribute String isbn;
    attribute String pubDate;
  };
  class SearchResult {
    attribute Integer id;
    attribute Integer openURLDescription;
    attribute String source;
    attribute String author;
    attribute String title;
    attribute Integer stock;
    attribute String link;
    attribute String price;
  };
  association AnswerOfQuestion {
    connection::OpenURLModel::OpenURLDescription as inp;
    connection::OpenURLModel::SearchResult [1..1] as outp navigable;
  };
};
This model is simple because it contains an OpenURLDescription, which contains the parameters of the query OpenURL; a SearchResult, which contains the parameters of the search result; and an AnswerOfQuestion, which connects the OpenURL query with the search result. The resolver queries this model only and does not know about the different data models below it. Then we generated the lower models of Amazon and the Hungarian National Library. We only had to pass the WSDLs of the applications to the SINTAGMA wrapper, and it created the models automatically. These models are very complex and are not shown here, because the data models of the Amazon and the Zing SRW (the web service used by the Hungarian National Library) web services are complex. The next step was to create two abstractions. The first abstraction is between the application model of Amazon and the conceptual model of OpenURL. This abstraction connects the
corresponding elements in the Amazon model and the OpenURL model, because there are direct connections between the elements, except for the author name, where the Amazon author name is the concatenation of the first and last name of the author in the OpenURL model (see Box 5). The second abstraction is between the application model of the Zing SRW service of the Hungarian National Library and the conceptual model of OpenURL. This abstraction is again a direct mapping between the corresponding elements (see Box 6). The last step was to query the conceptual model of OpenURL from our OpenURL resolver. We used the Java library of the distributed SINTAGMA system to create the SINTAGMA query. The following is the SINTAGMA query for the sample OpenURL mentioned at the beginning of this section (see Box 7). Creating models and mappings between the models does not need a programmer, only a knowledge engineer who knows the business area (the library system in our case). This is possible because SINTAGMA raised the problem to a higher abstraction level and the OpenURL resolver can always use
Box 5.

map bundle Amazon_OpenURLModel between Amazon and OpenURLModel {
  abstraction nev3 (isop: Amazon::ItemSearch_OPERATION,
                    isr: Amazon::ItemSearchRequest,
                    items: Amazon::Items,
                    item: Amazon::Item
                    -> ourl: OpenURLModel::OpenURL) {
    constraint
      isop.itemSearch.Request = isr and
      isop.ItemSearch.Items = items and
      items.Item = item and
      isop.itemSearch.SubscriptionId = "0W2KPT35SFFX0RVEK002" and
      isr.ResponseGroup = "Small" and
      isr.SearchIndex = "Books"
    implies
      isr.Author = ourl.auLast.concat(" ".concat(ourl.auFirst)) and
      ourl.result_isbn = item.ASIN and
      ourl.origin = "Amazon";
  };
};
Box 6.

map bundle Zing_OpenURLModel between Zing and OpenURLModel {
  abstraction nev1 (op: Zing::SearchRetrieveOperation_OPERATION,
                    in0: Zing::searchRetrieveRequestType
                    -> out: OpenURLModel::OpenURL) {
    constraint
      op.searchRetrieveRequestType = in0 and
      in0.maximumRecords = "10" and
      in0.version = "1.1" and
      in0.recordSchema = "dc"
    implies
      let d = op.SearchRetrieveOperation.records.record.recordData.toMap in
        out.result_title = (String)d.get("title") AND
        in0.query = out.query;
  };
};
Box 7.

select openURLquery.outp.source,
       openURLquery.outp.author,
       openURLquery.outp.title,
       openURLquery.outp.stock,
       openURLquery.outp.link,
       openURLquery.outp.price
from openURLquery: OpenURLModel::AnswerOfQuestion
where openURLquery.inp.auFirst.contains("Jeno") and
      openURLquery.inp.auLast.contains("Rejto") and
      openURLquery.inp.title.contains("Quarantine%20in%20the%20Grand%20Hotel") and
      openURLquery.inp.isbn.contains("963-13-5374-5") and
      openURLquery.inp.pubDate.contains("2005")
the SINTAGMA query to resolve the OpenURL expression to any target.
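To show how a resolver might derive such a query, the following Python sketch parses the metadata part of an OpenURL and fills a query template like the one in Box 7; it is our own illustrative helper, not part of the SINTAGMA Java library. (Note that parse_qs decodes the percent-escapes, so the generated title condition contains plain spaces.)

from urllib.parse import urlsplit, parse_qs

# Map OpenURL parameter names to attributes of the OpenURLModel upper model.
PARAM_TO_ATTR = {"aufirst": "auFirst", "aulast": "auLast",
                 "title": "title", "isbn": "isbn", "date": "pubDate"}

def openurl_to_silan(openurl):
    params = parse_qs(urlsplit(openurl).query)
    conditions = [
        "openURLquery.inp.%s.contains(\"%s\")" % (attr, params[name][0])
        for name, attr in PARAM_TO_ATTR.items() if name in params
    ]
    return ("select openURLquery.outp.source, openURLquery.outp.author,\n"
            "       openURLquery.outp.title, openURLquery.outp.stock,\n"
            "       openURLquery.outp.link, openURLquery.outp.price\n"
            "from openURLquery: OpenURLModel::AnswerOfQuestion\n"
            "where " + " and\n      ".join(conditions))

print(openurl_to_silan(
    "http://viola.oszk.hu:8080/sokk/OpenURL_Servlet?sid=OSZK:LibriVision"
    "&genre=book&aufirst=Jeno&aulast=Rejto&isbn=963-13-5374-5"
    "&title=Quarantine%20in%20the%20Grand%20Hotel&date=2005"))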
CONCLUSION

This chapter presented how XML data provided by web services and relational data can be integrated. The main tool to integrate web services with relational data is the Web Service Wrapper component of the SINTAGMA Enterprise Information Integration system. This component makes the integration of XML data services with relational databases easy, because the data model of a web service is automatically created by the Web Service Wrapper of the SINTAGMA system. Based on these application level data models, a knowledge engineer can create a unified conceptual model of all data sources, as well as abstract mappings between the unified conceptual model and the application level models. Then the conceptual model can be queried from the SINTAGMA system, which hides the diversity of the different data sources. The set of data sources can be extended easily by generating the application level data model for the new data source and creating an abstraction between the new application model and the existing conceptual model. The source code of the querying program does not have to be changed. Creating the models and mappings between the models does not need a programmer but a knowledge engineer who can focus on the business area and logic.

ACKNOWLEDGMENT

The authors acknowledge the support of the Hungarian NKFP programme of the SINTAGMA project under grant number 2/052/2004. We also would like to thank all the people participating in this project.
REFERENCES

Benkő, T., Lukácsy, G., Fokt, A., Szeredi, P., Kilián, I., & Krauth, P. (2003). Information Integration through Reasoning on Meta-data. Proceedings of the workshop "AI Moves to IA", IJCAI 2003, Acapulco, Mexico, (pp. 65-77).

Calvanese, D., De Giacomo, G., Lenzerini, M., Nardi, D., & Rosati, R. (1998). Description Logic Framework for Information Integration. Principles of Knowledge Representation and Reasoning, (pp. 2-13).
Clark, T., & Warmer, J. (2002). Object Modeling with the OCL: The Rationale behind the Object Constraint Language. Springer.

Fowler, M., & Scott, K. (1998). UML Distilled: Applying the Standard Object Modeling Language. Addison-Wesley.

Grigorova, V. (2006). Semantic Description of Web Services and Possibilities of BPEL4WS. Information Theories and Applications, 13, 183–187.

Horrocks, I. (2002). Reasoning with Expressive Description Logics: Theory and Practice. In Proceedings of the 18th International Conference on Automated Deduction (CADE 2002), (pp. 1-15).

Lukácsy, G., Benkő, T., & Szeredi, P. (2007). Towards Automatic Semantic Integration. In 3rd International Conference on Interoperability for Enterprise Software and Applications (I-ESA 2007).

Varga, L. Z., Hajnal, Á., & Werner, Z. (2004). An Agent Based Approach for Migrating Web Services to Semantic Web Services. In C. Bussler & D. Fensel (Eds.), Artificial Intelligence: Methodology, Systems, and Applications, 11th International Conference, AIMSA 2004, Varna, Bulgaria, September 2-4, 2004, Proceedings, Lecture Notes in Computer Science Vol. 3192, (pp. 371-380). Heidelberg, Germany: Springer-Verlag. ISBN 3-540-22959-0.

Varga, L. Z., Hajnal, Á., & Werner, Z. (2005). The WSDL2Agent Tool. In R. Unland, M. Klusch, & M. Calisti (Eds.), Software Agent-Based Applications, Platforms and Development Kits, Whitestein Series in Software Agent Technologies, (pp. 197-223). Basel, Switzerland: Springer. ISBN 3-7643-7347-4.
Vasiliu, L., Harand, S., & Cimpian, E. (2004). The DIP Project: Enabling Systems & Solutions For Processing Digital Content With Semantic Web Services. EWIMT 2004 European Workshop on the Integration of Knowledge, Semantics and Digital Media Technology.
ADDITIONAL READING

Apps, A., & MacIntyre, R. (2006). Why OpenURL? D-Lib Magazine, 12(5). doi:10.1045/may2006-apps

Baader, F., Calvanese, D., McGuinness, D., Nardi, D., & Patel-Schneider, P. (2003). The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press.

Booch, G., Jacobson, I., & Rumbaugh, J. (1999). The Unified Modeling Language User Guide. Addison-Wesley.

Cerami, E. (2002). Web Services Essentials. O'Reilly & Associates.

Chawathe, S., Garcia-Molina, H., Hammer, J., Ireland, K., Papakonstantinou, Y., Ullman, J., & Widom, J. (1994). The Tsimmis Project: Integration of Heterogeneous Information Sources. Proceedings of the IPSJ Conference, (pp. 7-18).

Hepp, M., Leymann, F., Domingue, J., Wahler, A., & Fensel, D. (2005). Semantic Business Process Management: A Vision Towards Using Semantic Web Services for Business Process Management. Proceedings of the IEEE International Conference on e-Business Engineering (ICEBE 2005), (pp. 535-540). IEEE Computer Society.

Hodgson, C. (2005). Understanding the OpenURL Framework. NISO Information Standards Quarterly, 17(3), 1–4.

Kline, K., & Kline, D. (2001). SQL in a Nutshell. O'Reilly & Associates.
Lukácsy, G., & Szeredi, P. (2008). Combining Description Logics and Object Oriented Models in an Information Framework. Periodica Polytechnica.

Polleres, A., Pearce, D., Heymans, S., & Ruckhaus, E. (2007). Proceedings of the 2nd International Workshop on Applications of Logic Programming to the Web, Semantic Web and Semantic Web Services (ALPSWS2007). CEUR Workshop Proceedings, Vol. 287, http://ceur-ws.org/Vol-287.

Ray, E. T. (2001). Learning XML. O'Reilly & Associates.

Ricardo, J. G., Müller, J. P., Mertins, K., & Zelm, M. (2007). Enterprise Interoperability II: New Challenges and Approaches, Proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications (I-ESA07). Springer Verlag.

St. Laurent, S., Johnston, J., & Dumbill, E. (2001). Programming Web Services with XML-RPC. O'Reilly & Associates.

Studer, R., Grimm, S., & Abecker, A. (Eds.). (2007). Semantic Web Services. Springer.

Van der Vlist, E. (2002). XML Schema. O'Reilly & Associates.

Walmsley, P. (2007). XQuery. O'Reilly Media.

Walsh, A. E. (2002). UDDI, SOAP, and WSDL: The Web Services Specification Reference Book. Pearson Education.
ENDNOTES

1. Web Services Architecture, W3C Working Group Note 11 February 2004. http://www.w3.org/TR/2004/NOTE-ws-arch-20040211
2. The SINTAGMA Enterprise Information Integration System was developed under the Hungarian NKFP programme of the SINTAGMA project. Web page: http://www.sintagma.hu (available in Hungarian language only)
3. Simple Object Access Protocol. http://www.w3.org/TR/2000/NOTE-SOAP-20000508
4. Web Services Description Language. http://www.w3.org/TR/2001/NOTE-wsdl-20010315
5. XML Schema. http://www.w3.org/XML/Schema
6. Web Services Interoperability Organization. http://www.ws-i.org
7. Object Management Group: The Common Object Request Broker: Architecture and Specification, revision 2, July 1995.
8. http://alcme.oclc.org/openurl/docs/pdf/openurl-01.pdf
9. http://www.niso.org/standards/standard_detail.cfm?std_id=783
10. http://www.theeuropeanlibrary.org
This work was previously published in Services and Business Computing Solutions with XML: Applications for Quality Management and Best Processes, edited by Patrick Hung, pp. 82-97, copyright 2009 by Business Science Reference (an imprint of IGI Global).
Chapter 4.6
System-of-Systems Cost Estimation:
Analysis of Lead System Integrator Engineering Activities

Jo Ann Lane, University of Southern California, USA
Barry Boehm, University of Southern California, USA
ABSTRACT

As organizations strive to expand system capabilities through the development of system-of-systems (SoS) architectures, they want to know "how much effort" and "how long" to implement the SoS. In order to answer these questions, it is important to first understand the types of activities performed in SoS architecture development and integration and how these vary across different SoS implementations. This article provides the results of research conducted to determine the types of SoS lead system integrator (LSI) activities and how these differ from the more traditional system engineering activities described in Electronic Industries Alliance (EIA) 632 ("Processes for Engineering a System"). This research further analyzed effort and schedule issues on "very large" SoS programs to more clearly identify and profile the types of activities performed by the typical LSI and to determine the organizational characteristics that significantly impact the overall success and productivity of the LSI effort. The results of this effort have been captured in a reduced-parameter version of the constructive SoS integration cost model (COSOSIMO) that estimates LSI SoS engineering (SoSE) effort.
INTRODUCTION

As organizations strive to expand system capabilities through the development of system-of-systems (SoS) architectures, they want to know "how much effort" and "how long" to implement the SoS. Efforts are currently underway at the University of Southern California (USC) Center for Systems and Software Engineering (CSSE) to develop a cost model to estimate the effort associated with SoS lead system integrator (LSI) activities. The research described in this article is in support of the development of this cost model, the constructive SoS integration cost model (COSOSIMO). Research conducted to date in this area has focused more on the technical characteristics of the SoS. However, feedback from USC CSSE industry affiliates indicates that the extreme complexity typically associated with SoS architectures and the political issues between participating organizations have a major impact on the LSI effort. This is also supported by surveys of system acquisition managers (Blanchette, 2005) and studies of failed programs (Pressman & Wildavsky, 1973). The focus of this current research is to further investigate effort and schedule issues on "very large" SoS programs and to determine the key activities in the development of SoSs and the organizational characteristics that significantly impact the overall success and productivity of the program. This article first describes the context for the COSOSIMO cost model, then presents a conceptual view of the cost model that has been developed using expert judgment, describes the methodology being used to develop the model, and summarizes the conclusions reached to date.
COSOSIMO CONTEXT

We are seeing a growing trend in industry and government agencies to "quickly" incorporate new technologies and expand the capabilities of legacy systems by integrating them with
other legacy systems, commercial-off-the-shelf (COTS) products, and new systems into a system of systems, generally with the intent to share information from related systems and to create new, emergent capabilities that are not possible with the existing stove-piped systems. With this development approach, we see new activities being performed to define the new architecture, identify sources to either supply or develop the required components, and then to integrate and test these high level components. Along with this "system-of-systems" development approach, we have seen a new role in the development process evolve to perform these activities: that of the LSI. A recent Air Force study (United States Air Force Scientific Advisory Board, 2005) clearly states that the SoS engineering (SoSE) effort and focus related to LSI activities is considerably different from the more traditional system development projects. According to this report, the key areas where LSI activities are more complex than or different from traditional systems engineering are system architecting, especially in the areas of system interoperability and system "ilities"; acquisition and management; and anticipation of needs.

Key to developing a cost model such as COSOSIMO is understanding what a "system-of-systems" is. Early literature research (Jamshidi, 2005) showed that the term "system-of-systems" can mean many things across different organizations. For the purposes of the COSOSIMO cost model development, the research team has focused on the SoS definitions provided in Maier (1999) and Sage and Cuppan (2001): an evolutionary net-centric architecture that allows geographically distributed component systems to exchange information and perform tasks within the framework that they are not capable of performing on their own outside of the framework. This is often referred to as "emergent behaviors." Key issues in developing an SoS are the security of the information shared between the various component systems, how to get the right information to the right destinations efficiently without overwhelming users with unnecessary or obsolete information, and how to maintain dynamic networks so that component system "nodes" can enter and leave the SoS.

Today, there are fairly mature tools to support the estimation of the effort and schedule associated with the lower-level SoS component systems (Boehm, Valerdi, Lane, & Brown, 2005). However, none of these models supports the estimation of LSI SoSE activities. COSOSIMO, shown in Figure 1, is a parametric model currently under development to compute just this effort. The goal is to support activities for estimating the LSI effort in a way that allows users to develop initial estimates and then conduct tradeoffs based on architecture and development process alternatives.

Figure 1. COSOSIMO model structure (inputs: size drivers and cost drivers; output: SoS definition and integration effort; the model is tuned by calibration)
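Since the published form and calibration of COSOSIMO are not given here, the following Python sketch only illustrates the general shape of such a parametric model, with placeholder constants and weights of our own choosing; it should not be read as the actual COSOSIMO equation.

from math import prod

def parametric_effort(size_drivers, cost_drivers, A=1.0, E=1.1):
    """Generic COCOMO-style form: effort = A * (weighted size)^E * product(multipliers).

    size_drivers: {driver_name: (count, weight)}, e.g. SoS-related requirements
    cost_drivers: {driver_name: effort_multiplier}, e.g. architecture maturity
    A, E: placeholder calibration constants
    """
    size = sum(count * weight for count, weight in size_drivers.values())
    return A * size ** E * prod(cost_drivers.values())

effort = parametric_effort(
    size_drivers={"sos_requirements": (120, 1.0), "interface_protocols": (8, 4.0)},
    cost_drivers={"requirements_understanding": 1.2, "architecture_maturity": 0.9},
)
print(round(effort, 1))  # effort under the placeholder calibration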
Recent LSI research, conducted by reviewing LSI statements of work, identifies the following typical LSI activities:

• Concurrent engineering of requirements, architecture, and plans
• Identification and evaluation of technologies to be integrated
• Source selection of vendors and suppliers
• Management and coordination of supplier activities
• Validation and feasibility assessment of the SoS architecture
• Continual integration and test of SoS-level capabilities
• SoS-level implementation planning, preparation, and execution
• On-going change management at the SoS level and across the SoS-related integrated product teams to support the evolution of requirements, interfaces and technology
Figure 2. Architecture-based SoS cost estimation. The SoS (level 0) decomposes into component systems S1 ... Sm (level 1), which in turn decompose into S11 ... Smn (level 2). The figure maps each activity to the levels at which it occurs and the applicable cost model:

• SoS Lead System Integrator Effort (SoS scoping, planning, requirements, architecting; source selection; teambuilding, rearchitecting, feasibility assurance with selected suppliers; incremental acquisition management; SoS integration and test; transition planning, preparation, and execution; and continuous change, risk, and opportunity management): level 0, and other levels if lower level system components are also SoSs (COSOSIMO)
• Development of SoS software-intensive infrastructure and integration tools: level 0 (COCOMO II)
• System engineering for SoS components: levels 1-n (COSYSMO)
• Software development for software-intensive components: levels 1-n (COCOMO II)
• COTS assessment and integration for COTS-based components: levels 1-n (COCOTS)
With the addition of this new cost model to the constructive cost model (COCOMO) suite of cost models, one can easily develop more comprehensive estimates for the total SoS development, as shown in Figure 2.
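A hedged sketch of how the estimates of Figure 2 might be rolled up into a total; the function name and the example values are invented for illustration.

def total_sos_estimate(lsi_effort, infrastructure_effort, component_efforts):
    """Combine per-model estimates into a total SoS development estimate.

    lsi_effort: level-0 LSI SoSE effort (COSOSIMO)
    infrastructure_effort: SoS infrastructure and integration tools (COCOMO II)
    component_efforts: per-component efforts at levels 1-n
        (COSYSMO, COCOMO II, COCOTS, as applicable)
    """
    return lsi_effort + infrastructure_effort + sum(component_efforts)

print(total_sos_estimate(400.0, 250.0, [120.0, 95.0, 60.0]))  # example totals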
LSI EFFORT ESTIMATION APPROACH

As mentioned above, key to an LSI effort estimation model is having a clear understanding of the SoSE activities performed by the organization, as well as which activities require the most effort. In addition, it is important to understand how these SoSE activities differ from the more traditional systems engineering activities. The analysis presented in Lane (2005) describes how the typical LSI SoSE activities differ from the more traditional system engineering activities identified in EIA 632 (Electronic Industries Alliance, 1999) and the Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) (Software Engineering Institute, 2001). Subsequently, Delphi surveys conducted with USC CSSE industry
affiliates have identified key size drivers and cost drivers for LSI effort; these are shown in Table 1. Because there are concerns about the availability of effort data from a sufficient number of SoS programs to support model calibration and validation, current efforts are focusing on defining a "reduced parameter set" cost model, or ways to estimate parts of the LSI effort using fewer, but more specific, parameters. The following paragraphs present the results of this recent research. Further observations of LSI organizations indicate that the LSI activities can be grouped into three areas: 1) planning, requirements management, and architecting (PRA); 2) source selection and supplier oversight (SS); and 3) SoS integration and testing (I&T). There are typically different parts of the LSI organization that are responsible for these three areas. Figure 3 illustrates, conceptually, how the effort for these three areas is distributed across the SoS development life cycle phases of inception, elaboration, construction, and transition for a given increment or evolution of SoS development. Planning, requirements, and architecting begin early in the life cycle. As the requirements are refined and the SoS architecture is defined and matured, source selection activities can begin to identify component system vendors and to issue contracts to incorporate the necessary SoS-enabling capabilities. With a mature SoS architecture
Table 1. COSOSIMO cost model parameters Size Drivers
• # SoS-related requirements • # SoS interface protocols • # independent component system organizations • # SoS scenarios • # unique component systems
Cost Drivers • Requirements understanding • Architecture maturity • Level of service requirements • Stakeholder team cohesion • SoS team capability • Maturity of LSI processes • Tool support • Cost/schedule compatibility • SoS Risk Resolution • Component system maturity and stability • Component system readiness
989
System-of-Systems Cost Estimation
Figure 3. Conceptual LSI effort profile
and the identification of a set of component systems for the current increment, the integration team can begin the integration and test planning activities. Once an area ramps up, it continues through the transition phase at some nominal level to ensure as smooth a transition as possible and to capture lessons learned to support activities and plans for the next increment. Boehm and Lane (2006) describe how some of these activities directly support the current plan-driven SoS development effort while others are more agile, forward looking, trying to anticipate and resolve problems before they become huge impacts. The goal is to stabilize development for the current increment while deferring as much change as possible to future increments. For example, the planning/requirements/architecture group continues to manage the requirements change traffic that seems to be so common in these large systems, only applying those changes to the current increment that are absolutely necessary and deferring the rest to future increments. The architecture team also monitors current increment activities in order to make necessary adjustments to the architecture to handle cross-cutting technology issues that arise during the component system supplier construction activities. Likewise, the supplier oversight group continues to monitor the suppliers for risks, cost, and schedule issues that arise out of SoS conflicts with the component system stakeholder
990
needs and desires. As the effort ramps down in the transition phase, efforts are typically ramping up for the next increment or evolution. By decomposing the COSOSIMO cost model into three components that correspond to the three primary areas of LSI SoSE effort, the parameter set for each COSOSIMO component can be reduced from the full set and the applicable cost drivers made more specific to the target area. Table 2 shows the resulting set of size and cost drivers for each of the three primary areas. This approach allows the model developers to calibrate and validate the model components with fewer parameters and data sets. It also allows the collection of data sets from organizations that are only responsible for a part of the LSI SoSE activities. Finally, this approach to LSI SoSE effort estimation allows the cost model to provide estimates for the three areas, as well as a total estimate—a key request from USC CSSE industry affiliates supporting this research effort. Detailed definitions and proposed ratings for these parameters may be found in Lane (2006). The following provides a brief description of each of the COSOSIMO parameters. Note that several of the COSOSIMO parameters are similar to those defined for the constructive systems engineering cost model (COSYSMO) and are identified in the descriptions below.
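The article does not give COSOSIMO's equations; purely as a sketch in generic COCOMO-family notation (all symbols here are assumptions, not taken from the source), the decomposition amounts to:

    \mathrm{Effort}_{\mathrm{LSI}} \;=\; \sum_{k \in \{\mathrm{PRA},\,\mathrm{SS},\,\mathrm{I\&T}\}} A_k \cdot \Big( \prod_i \mathrm{EM}_{k,i} \Big) \cdot S_k^{\,E_k}

where, for each sub-model k, S_k aggregates that area's size drivers, the exponent E_k collects the scale-related (diseconomy-of-scale) cost drivers, and the EM_{k,i} are the linear effort multipliers. The model development methodology section below describes exactly this size driver / exponential scale factor / effort multiplier structure.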
Detailed definitions and proposed ratings for these parameters may be found in Lane (2006). The following provides a brief description of each of the COSOSIMO parameters. Note that several of the COSOSIMO parameters are similar to those defined for the constructive systems engineering cost model (COSYSMO) and are identified in the descriptions below.

Table 2. COSOSIMO parameters by SoSE area

PRA
• Associated size drivers: # SoS-related requirements; # SoS interface protocols
• Associated cost drivers: Requirements understanding; Level of service requirements; Stakeholder team cohesion; SoS PRA capability; Maturity of LSI PRA processes; PRA tool support; Cost/schedule compatibility with PRA processes; SoS PRA risk resolution

SS
• Associated size drivers: # independent component system organizations
• Associated cost drivers: Requirements understanding; Architecture maturity; Level of service requirements; SoS SS capability; Maturity of LSI SS processes; SS tool support; Cost/schedule compatibility with SS activities; SoS SS risk resolution

I&T
• Associated size drivers: # SoS interface protocols; # SoS scenarios; # unique component systems
• Associated cost drivers: Requirements understanding; Architecture maturity; Level of service requirements; SoS I&T capability; Maturity of LSI I&T processes; I&T tool support; Cost/schedule compatibility with I&T activities; SoS I&T risk resolution; Component system maturity and stability; Component system readiness

COSOSIMO Size Drivers

Number of SoS-Related Requirements¹

This driver represents the number of requirements for the SoS of interest at the SoS level. Requirements may be functional, performance, feature, or service-oriented in nature, depending on the methodology used for specification. They may also be defined by the customer or contractor. SoS requirements can typically be quantified by counting the number of applicable shalls, wills, shoulds, and mays in the SoS or marketing specification. Note: Some work may be required to decompose requirements to a consistent level so that they may be counted accurately for the appropriate SoS-of-interest.

Number of SoS Interface Protocols

The number of distinct net-centric interface protocols to be provided/supported by the SoS framework. Note: This does NOT include interfaces internal to the SoS component systems, but it does include interfaces external to the SoS and between the SoS component systems. Also note that this is not a count of total interfaces (in many SoSs, the total number of interfaces may be very dynamic as component systems come and go in the SoS environment; in addition, there may be multiple instances of a given type of component system), but rather a count of distinct protocols at the SoS level.

Number of Independent Component System Organizations

The number of organizations managed by the LSI that are providing SoS component systems.

Number of Operational Scenarios¹

This driver represents the number of operational scenarios that an SoS must satisfy. Such scenarios include both the nominal stimulus-response thread plus all of the off-nominal threads resulting from bad or missing data, unavailable processes, network connections, or other exception-handling cases. The number of scenarios can typically be quantified by counting the number of SoS states, modes, and configurations defined in the SoS concept of operations, or by counting the number of "sea-level" use cases (Cockburn, 2001), including off-nominal extensions, developed as part of the operational architecture.

Number of Unique Component Systems

The number of types of component systems that are planned to operate within the SoS framework. If there are multiple versions of a given type that have different interfaces, then the different versions should also be included in the count of component systems.
COSOSIMO Cost Drivers

Requirements Understanding¹

This cost driver rates the level of understanding of the SoS requirements by all of the affected organizations. For the PRA sub-model, it includes the PRA team as well as the SoS customers and sponsors, SoS PRA team members, component system owners, users, and so forth. For the SS sub-model, it is the level of understanding between the LSI and the component system suppliers/vendors. For the I&T sub-model, it is the level of understanding between all of the SoS stakeholders, with emphasis on the SoS I&T team members.
Level of Service Requirements¹

This cost driver rates the difficulty and criticality of satisfying the ensemble of level of service requirements or key performance parameters (KPPs), such as security, safety, transaction speed, communication latency, interoperability, flexibility/adaptability, and reliability. This parameter should be evaluated with respect to the scope of the sub-model to which it pertains.
Team Cohesion¹

Represents a multi-attribute parameter that includes leadership, shared vision, diversity of stakeholders, approval cycles, group dynamics, integrated product team (IPT) framework, team dynamics, trust, and amount of change in responsibilities. It further represents the heterogeneity of the stakeholder community of end users, customers, implementers, and the development team. For each sub-model, this parameter should be evaluated with respect to the appropriate LSI team (e.g., PRA, SS, or I&T).
Team Capability

Represents the anticipated level of team cooperation and cohesion, personnel capability and continuity, as well as LSI personnel experience with the relevant domains, applications, languages, and tools. For each sub-model, this parameter should be evaluated with respect to the appropriate LSI team (e.g., PRA, SS, or I&T).
Process Maturity

A parameter that rates the maturity level and completeness of the LSI's processes and plans. For each sub-model, this parameter should be evaluated with respect to the appropriate LSI team processes (e.g., PRA, SS, or I&T).
Tool Support¹

Indicates the coverage, integration, and maturity of the tools in the SoS engineering and management environments. For each sub-model, this parameter should be evaluated with respect to the tool support available to the appropriate LSI team (e.g., PRA, SS, or I&T).
Cost/Schedule Compatibility

The extent of business or political pressures to reduce the cost and schedule associated with the LSI's activities and processes. For each sub-model, this parameter should be evaluated with respect to the cost/schedule compatibility for the appropriate LSI team activities (e.g., PRA, SS, or I&T).
Risk Resolution

A multi-attribute parameter that represents the number of major SoS/LSI risk items, the maturity of the associated risk management and mitigation plan, the compatibility of schedules and budgets, expert availability, tool support, and the level of uncertainty in the risk areas. For each sub-model, this parameter should be evaluated with respect to the risk resolution activities for the associated LSI team (e.g., PRA, SS, or I&T).
Architecture Maturity

A parameter that represents the level of maturity of the SoS architecture. It includes the level of detail of the interface protocols and the level of understanding of the performance of the protocols in the SoS framework. Two COSOSIMO sub-models use this parameter, and it should be evaluated in each case with respect to the LSI activities covered by the sub-model of interest.
Component System Maturity and Stability

A multi-attribute parameter that indicates the maturity level of the component systems (number of new component systems versus number of component systems currently operational in other environments), the overall compatibility of the component systems with each other and with the SoS interface protocols, the number of major component system changes being implemented in parallel with the SoS framework changes, and the anticipated change in the component systems during SoS integration activities.
Component System Readiness

Indicates the readiness of the component systems for integration. The user evaluates the level of verification and validation (V&V) that has been or will be performed prior to integration, and the level of subsystem integration activities that will be performed prior to integration into the SoS integration lab.
Figure 4. USC CSE cost model development methodology. (The figure shows seven steps, with concurrency and feedback implied: Step 1, analyze existing literature; Step 2, perform behavioral analyses; Step 3, identify relative significance; Step 4, perform expert-judgment Delphi assessment, formulate a-priori model; Step 5, gather project data; Step 6, determine Bayesian a-posteriori model; Step 7, gather more data, refine model.)
COSOSIMO COST MODEL DEVELOPMENT METHODOLOGY

The COSOSIMO cost model is being developed using the proven cost model development methodology developed over the last several years at the USC CSSE. This methodology, described in Boehm et al. (2000), is illustrated in Figure 4. For COSOSIMO, the literature review has focused on the definitions of SoSs and SoSE; the role and scope of activities typically performed by LSIs; and the analysis of cost factors used in related software, systems engineering, and COTS integration cost models, as well as related system dynamics models that investigate candidate SoSE cost factors.

The behavioral analyses determine the potential range of values for the candidate cost drivers and the relative impact that each has on the overall effort associated with the relevant SoSE activities. For example, if the stakeholder team cohesion is very high, what is the impact on the PRA effort? Likewise, if the stakeholder team cohesion is very low, what is the resulting impact on PRA effort? The results of the behavioral analyses are then used to develop a preliminary model form. The parameters include a set of one or more size drivers, a set of exponential scale factors, and a set of effort multipliers. Cost drivers that are related to economies/diseconomies of scale as size is increased are combined into an exponential factor. Other cost drivers that have a more linear behavior with respect to size drivers are combined into an effort multiplier.

Next, the model parameters, definitions, ranges of values, rating scales, and behaviors are reviewed with industry and research experts using a wideband Delphi process. The consensus of the experts is used to update the preliminary model. In addition to expert judgment, actual effort data is collected from successful projects covering the LSI activities of interest. A second model, based on actual data fitting, is then developed. Finally, the expert judgment and actual data models are combined using Bayesian techniques. In this process, more weight is given to expert judgment when actual data is inconsistent or sparse, and more weight is given to actual data when the data is fairly consistent and experts do not strongly agree.

Since technologies and engineering approaches are constantly evolving, it is important to continue data collection and model analysis and to update the model when appropriate. Historically, this has led to parameters related to older technologies being dropped and new parameters being added. In the case of COSOSIMO, it will be important to track the evolution of SoS architectures and integration approaches and the development of convergence protocols. For COSOSIMO, each of the sub-models will go through this development process. Once the sub-models are calibrated and validated, they may be combined to estimate the total LSI effort for a proposed SoS development program. To date, several expert judgment surveys have been conducted and actual data collection is in process.
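The Bayesian combination step can be sketched with the standard precision-weighted form used for COCOMO-family calibration (the formula is not given in this text, and the notation is an assumption). For a model parameter b:

    b_{\text{posterior}} \;=\; \frac{ b_{\text{data}}/\sigma^2_{\text{data}} \;+\; b_{\text{expert}}/\sigma^2_{\text{expert}} }{ 1/\sigma^2_{\text{data}} \;+\; 1/\sigma^2_{\text{expert}} }

A large data variance (sparse or inconsistent project data) pulls the a-posteriori value toward the expert-judgment a-priori value, while consistent data pulls it toward the data-determined value, which is exactly the weighting behavior described above.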
CONCLUSION

LSI organizations are realizing that if more traditional processes are used to architect and integrate SoSs, it will take too long and too much effort to find optimal solutions and build them. Preliminary analysis of LSI activities shows that while many of the LSI activities are similar to those described in EIA 632 and the SEI's CMMI, LSIs are identifying ways to combine agile processes with traditional processes to increase concurrency, reduce risk, and further compress overall schedules. In addition, effort profiles for the key LSI activities (the up-front effort associated with SoS abstraction, architecting, source selection, systems acquisition, and supplier and vendor oversight during development, as well as the effort associated with the later activities of integration, test, and change management) show that the percentage of time spent on key activities differs considerably from more traditional system engineering efforts. By capturing the effects of these differences in organizational structure and system engineering processes in a reduced-parameter version of COSOSIMO, management will have a tool to better predict LSI SoSE effort and to conduct "what if" comparisons of different development strategies.
REFERENCES

Blanchette, S. (2005). U.S. Army acquisition – The program executive officer perspective (Special Report CMU/SEI-2005-SR-002). Pittsburgh, PA: Software Engineering Institute.

Boehm, B., Abts, C., Brown, A., Chulani, S., Clark, B., et al. (2000). Software cost estimation with COCOMO II. Upper Saddle River, NJ: Prentice Hall.

Boehm, B., Valerdi, R., Lane, J., & Brown, A. (2005). COCOMO suite methodology and evolution. CrossTalk, 18(4), 20-25.

Boehm, B., & Lane, J. (2006). 21st century processes for acquiring 21st century systems of systems. CrossTalk, 19(5), 4-9.

Cockburn, A. (2001). Writing effective use cases. Boston, MA: Addison-Wesley.

Electronic Industries Alliance. (1999). EIA Standard 632: Processes for engineering a system.

Jamshidi, M. (2005). System-of-systems engineering - A definition. In Proceedings of the IEEE System, Man, and Cybernetics (SMC) Conference. Retrieved January 29, 2005, from http://ieeesmc2005.unm.edu/SoSE_Defn.htm

Lane, J. (2005). System of systems lead system integrators: Where do they spend their time and what makes them more/less efficient (Tech. Rep. No. 2005-508). Los Angeles, CA: University of Southern California Center for Systems and Software Engineering.

Lane, J. (2006). COSOSIMO parameter definitions (Tech. Rep. No. 2006-606). Los Angeles, CA: University of Southern California Center for Systems and Software Engineering.

Maier, M. (1998). Architecting principles for systems-of-systems. Systems Engineering, 1(4), 267-284.

Pressman, J., & Wildavsky, A. (1973). Implementation: How great expectations in Washington are dashed in Oakland. Oakland, CA: University of California Press.

Sage, A., & Cuppan, C. (2001). On the systems engineering and management of systems of systems and federations of systems. Information, Knowledge, and Systems Management, 2, 325-345.

Software Engineering Institute. (2001). Capability maturity model integration (CMMI) (Special Report CMU/SEI-2002-TR-001). Pittsburgh, PA: Software Engineering Institute.

United States Air Force Scientific Advisory Board. (2005). Report on system-of-systems engineering for Air Force capability development (Public Release SAB-TR-05-04). Washington, DC: HQ USAF/SB.

Valerdi, R. (2005). The constructive systems engineering cost model (COSYSMO). Unpublished doctoral dissertation, University of Southern California, Los Angeles.
ENDNOTE

¹	Adapted to the SoS environment from COSYSMO (Valerdi, 2005).
This work was previously published in Encyclopedia of Emerging Systems Approaches in Information Technologies: Concepts, Theories, and Applications, edited by David Paradice, pp. 204-213, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 4.7
Consistency and Modularity in Mediated Service-Based Data Integration Solutions

Yaoling Zhu, Dublin City University, Ireland
Claus Pahl, Dublin City University, Ireland
DOI: 10.4018/978-1-60566-330-2.ch006

ABSTRACT

A major aim of the Web service platform is the integration of existing software and information systems. Data integration is a central aspect in this context. Traditional techniques for information and data transformation are, however, not sufficient to provide flexible and automatable data integration solutions for Web service-enabled information systems. The difficulties arise from a high degree of complexity in data structures in many applications and from the additional problem of heterogeneity of data representation in applications that often cross organisational boundaries. The authors present an integration technique that embeds a declarative data transformation technique based on semantic data models as a mediator service into a Web service-oriented information system architecture. Automation through consistency-oriented semantic data models and flexibility through modular declarative data transformations are the key enablers of the approach.
INTRODUCTION

A major aim of the Web service platform is the integration of existing software and information systems (Alonso et al., 2004). Information and data integration is a central aspect in this context. Traditional techniques based on XML for data representation and XSLT for transformations between XML documents are not sufficient to provide a flexible and automatable data
integration solution for Web service-enabled information systems. Difficulties arise from the high degree of complexity in data structures in many business and technology applications and from the problem of heterogeneity of data representation in applications that cross organisational boundaries.

The emergence of the Web services platform and service-oriented architecture (SOA) as an architecture paradigm has provided a unified way to expose the data and functionality of an information system (Stal, 2002). The Web services platform has the potential to solve problems in the data integration domain such as heterogeneity and interoperability (Orriens, Yang and Papazoglou, 2003; Haller, Cimpian, Mocan, Oren and Bussler, 2005; Zhu et al., 2004). Our contribution is an integration technology framework for Web-enabled information systems comprising:

•	Firstly, a data integration technique based on semantic, ontology-based data models and the declarative specification of transformation rules, and
•	Secondly, a mediator architecture based on information services and the construction of connectors that handle the transformations to implement the integration process.
A data integration technique in the form of a mediator service can dynamically perform transformations based on a unified semantic data model built on top of individual data models in heterogeneous environments (Wiederhold, 1992). Abstraction has been used successfully to address flexibility problems in data processing (Rouvellou, Degenaro, Rasmus, Ehnebuske and McKee, 2000). With recent advances in abstract, declarative XML-based data query and transformation languages (Zhu et al., 2004) and Semantic Web and ontology technology (Daconta, Obrst and Smith, 2003), the respective results are ready to be utilised in the Web application context. The combination of declarative and semantic specification and automated support of architecture implementations provides the necessary flexibility and modularity to deal with complexity and consistency problems. Two central questions to the data integration problem and its automation shall be addressed in this investigation:

•	How to construct data model transformation rules, and how to express these rules in a formal, but also accessible and maintainable way.
•	How integration can be facilitated through service composition to enable interoperability through connector and relationship modelling.
We show how ontology-based semantic data models and a specific declarative data query and transformation language called Xcerpt (Bry and Schaffert, 2002) and its execution environment can be combined in order to allow dynamic data transformation and integration. We focus on technical solutions to semantically enhance data modelling and adapt Xcerpt and its support environment so that it can facilitate the dynamic generation of Xcerpt query programs (in response to user requests) from abstract transformation rules.
BACKGROUND

Information integration is the problem of combining heterogeneous data residing at different sources in order to provide the user with a unified view (Lenzerini, 2002). This view is central in any attempt to adapt services and their underlying data sources to specific client and provider needs. One of the main tasks in information integration is to define the mappings between the individual data sources and the unified view of these sources, and vice versa, to enable this required adaptation. Figure 1 shows two sample schemas, which might represent the views of client and provider on a collection of customers, that require integration.
The integration itself can be defined using transformation languages. Information integration has the objective of bringing together different types of data from different sources in order for this data to be accessed, queried, processed and analysed in a uniform manner. Recently, service-based platforms are being used to provide integration solutions. In the Web services context, data in XML representation, which is retrieved from individual Web-based data services, needs to be merged and transformed to meet the integration requirements. Data schema integration cannot be fully automated on a syntactic level since the syntactic representation of schemas and data does not convey the semantics of different data sources. For instance, a customer can be identified in the configuration management repository by a unique customer identifier; or, the same customer may be identified in the problem management repository by a combination of a service support identifier and its geographical location, see Figure 1. Ontology-based semantic data models can rectify this problem by providing an agreed vocabulary of concepts with associated properties.
XSLT is the most widely used XML data integration language, but it suffers from some limitations within our context due to its syntactical focus and operational language style:

•	Semantics: Only the syntactical integration of the query and construction parts of an XSLT transformation program is specified; consistency in terms of the semantics cannot be guaranteed.
•	Modularity: XSLT does not support a join or composition operator on XML documents that allows several source XML documents to be merged into one before being transformed.
•	Maintainability: XSLT transformations are difficult to write, maintain, and reuse for large-scale information integration. It is difficult to separate the source and target parts of transformation rules, as well as the filtering constraints, due to XSLT's operational character without a separation of query and construction concerns.
Figure 1. Two schema diagrams of the global data model that need to be integrated (© 2008, Claus Pahl. Used with permission).
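Since Figure 1 itself is not reproduced in this text, the following hypothetical XML fragments illustrate the kind of mismatch being described; the concrete element and attribute names are invented for illustration (only supportIdentifier is mentioned later in the text):

    <!-- Configuration management view: one opaque customer key -->
    <customer custID="C-1001">
      <custName>ACME Corp</custName>
    </customer>

    <!-- Problem management view: composite identification -->
    <customer>
      <supportIdentifier>
        <supportID>SID-42</supportID>
        <location>Ireland</location>
      </supportIdentifier>
      <custName>ACME Corp</custName>
    </customer>

Both fragments denote the same customer; a purely syntactic mapping cannot establish that, which is why an agreed ontology vocabulary is introduced below.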
Due to these drawbacks, we propose semantic data models and a declarative query and transformation approach providing more expressive power and the ability to automatically generate query and transformation programs as connectors for service-based data integration in Web-enabled information systems. A range of characteristics of XML query and transformation languages beyond XSLT, which have been studied and compared (Jhingran, Mattos and Pirahesh, 2002; Lenzerini, 2002; Peltier, Bezivin, and Guillaume, 2001), led us to choose the fully declarative language Xcerpt (Bry and Schaffert, 2002) as our transformation platform (Zhu, 2007).
DATA TRANSFORMATION AND CONNECTOR ARCHITECTURE

Mappings between data schemas of different participants might or might not represent the same semantical information. The Semantic Web, and in particular ontology-based data domain and service models (Daconta et al., 2003), can provide input for improvements of current integration approaches in terms of data modelling and transformation validation by providing a notion of consistency, based on which an automated transformation approach can become reliable (Reynaud, Sirot and Vodislav, 2001; Haller et al., 2005). We define consistency here as the preservation of semantics in transformations.

Information Architecture

Ontologies are knowledge representation frameworks that represent knowledge about a domain in terms of concepts and properties of these concepts. We use a description logic notation here, which is the formal foundation of many ontology languages such as OWL (Daconta et al., 2003). Description logic provides us with a concise notation to express a semantic data model. The elements of the XML data models of each of the participants are represented as concepts in the ontology. The concept Customer is defined in terms of its properties: data type-like properties such as a name or an identifier, and also object type properties such as a collection of services used by a customer. Three concept descriptions, using the existential quantifier "∃", express that a customer is linked to an identification through the supportID property, to a name using the custName property, and to services using the usedServices property. In some cases, these properties refer to other composite concepts; sometimes they refer to atomic concepts that act as type names here. Technically, the existential quantification means that there exists, for instance, a name that is a customer name.

Customer = ∃ supportID . Identification ∧ ∃ custName . Name ∧ ∃ usedServices . Service
Service = ∃ custID . ID ∧ ∃ servSystem . System
System = ∃ hasPart . Machine
The ontology represents syntactical and semantical properties of a common overarching data model, which is agreed upon by all participants, such as service (or data) provider and consumer. This model is actually a domain ontology, capturing central concepts of a domain and defining them semantically. This means that all individual XML data models can be mapped onto this common semantic model. These mappings can then be used to automatically generate transformations between different concrete participant data models. The overall information architecture is summarised in Figure 2. Although there is a standardised OWL-based equivalent for our description logic ontology, for practical reasons a corresponding semantically equivalent XML representation is needed.
Figure 2. Information architecture overview (© 2008, Claus Pahl. Used with permission)
The corresponding global XML schema representation for the customer element is illustrated by the sketch after this paragraph. Here, the principle of this mapping becomes clear: ontology concepts are mapped to XML elements, and specific predefined atomic concepts serve to represent simple properties that are mapped to XML attributes. We have focused on the core elements of ontologies and XML data here to highlight the principles. Description elements of XML, such as different types of attributes or option and iteration in element definitions, can also be captured through a refined property language. In particular, the Web Ontology Language OWL provides such constructs (W3C, 2004).
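The schema fragment itself is not reproduced in this text; the following is a hypothetical sketch of what the global XML Schema definition for the customer element could look like under the mapping principle just described (the xs prefix is bound to the XML Schema namespace; the concrete declarations are assumptions, not taken from the original):

    <xs:element name="Customer">
      <xs:complexType>
        <xs:sequence>
          <!-- object-type property: the customer's collection of used services -->
          <xs:element name="usedServices" type="Service"
                      minOccurs="0" maxOccurs="unbounded"/>
        </xs:sequence>
        <!-- data-type properties are represented as attributes -->
        <xs:attribute name="supportID" type="xs:string"/>
        <xs:attribute name="custName" type="xs:string"/>
      </xs:complexType>
    </xs:element>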
Transformation Rule Construction

The ontology provides a semantically defined global data model from which transformations between different participant data representations can be derived. This construction needs to address a number of specific objectives regarding the transformation rules:

•	Modularity of transformation rules is needed for the flexible generation and configuration of transformation rules, by allowing these rules to be specific to particular data elements.
•	Consistency needs to be addressed for the reliable generation and configuration of transformation rules, by allowing semantics-preserving rules to be constructed automatically.

Based on a data-oriented domain ontology and two given local data models (source and target, expressed as XML schemas) that are mapped onto the ontology, the rule construction process is based on three steps:

1.	Define one transformation rule per concept in the ontology that is represented in the target data model.
2.	Identify semantically equivalent concepts of the selected concepts in the source data model.
3.	For each identified concept: a) determine the required attributes (these are end nodes of the ontological structure), and b) copy semantically equivalent counterparts from the source model.
A necessary prerequisite is that all concepts of the source model are actually supported by the target data model. Otherwise, the transformation definition cannot be completed. The transformation rules based on the sample ontology for the given customer example will be presented later on, once the transformation language is introduced. These could be formulated such that the data integration problem depicted in Figure 1 is formally defined. The mappings between participant data models and the data ontology define semantically equivalent representations of commonly agreed ontology elements in the data models. Consequently, the presented rule construction process is consistent in that it preserves the semantics in transformations. The concrete target of this construction is the chosen declarative transformation language Xcerpt. The construction process has been expressed here in abstract terms; a complete specification in terms of transformation languages such as QVT, or even Xcerpt itself, would have been too verbose for this context. Declarativeness and modularity provide the required flexibility for our solution, in addition to the consistency that has been addressed through the semantic ontology-based data models. The construction of transformation rules is actually only the first step in the provision of XML data integration. These transformations can be constructed prior to the customer query construction and stored in rule repositories.
Xcerpt Background

We describe Xcerpt principles and the rationale for choosing it, and demonstrate how such a declarative language and its environment need to be adapted for deployment in a dynamic, mediated service context. Xcerpt is a query language designed for querying and transforming traditional XML and HTML data, as well as Semantic Web data in the form of RDF and OWL. One of the design principles is to strictly separate the matching part and the construction part in a transformation specification, see Figure 3. Xcerpt follows a pattern-based approach to querying XML data. Figure 3 shows a transformation example for a customer array based on Figure 1. The structure of this specification is based on a construction part (CONSTRUCT) and a source query part (FROM). An output customer in CustomerArray is constructed based on the elements of an item in an arrayOfCustomer by using a pattern matching approach, identifying relevant attributes in the source and referring to them in the constructed output through variables such as Name or CompanyID. During transformation, these hold the concrete values of the selected (matched) elements.

Figure 3. Declarative query and transformation specification of a customer array element in Xcerpt (© 2008, Claus Pahl. Used with permission)
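Since the figure itself is not reproduced in this text, the following is a hypothetical sketch, in Xcerpt-style syntax, of the kind of rule it describes; the element names (arrayOfCustomer, item, custName, supportIdentifier, companyID) are assumptions based on the surrounding discussion:

    CONSTRUCT
      CustomerArray [
        customer [
          name [ var Name ],
          companyID [ var CompanyID ]
        ]
      ]
    FROM
      arrayOfCustomer [[
        item [[
          custName [ var Name ],
          supportIdentifier [[ companyID [ var CompanyID ] ]]
        ]]
      ]]
    END

The double brackets denote incomplete (partial) query patterns, so the rule matches items regardless of any additional sibling elements; this is one of the incompleteness dimensions discussed below.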
Xcerpt distinguishes two types of specifications:

•	Goal-based query programs, identified by the keyword GOAL, are executable query programs that refer to input and output resources and that describe data extraction and construction.
•	Abstract transformation rules, identified by the keyword CONSTRUCT as in Figure 3, are function-like transformation specifications with no output resource associated.

Xcerpt extends the pattern-based approach, which is also used in other query and transformation languages, in the following ways:

•	Firstly, query patterns can be formulated as incomplete specifications in three dimensions. Incomplete query specifications can be represented in depth, which allows XML data to be selected at any arbitrary depth; in breadth, which allows querying neighbouring nodes by using wildcards; and in order. Incomplete query specifications allow patterns to be specified more flexibly without losing accuracy.
•	Secondly, simulation unification computes answer substitutions for the variables in the query pattern against the underlying XML terms.

Xcerpt provides a runtime environment with an execution engine at its core (Schaffert, 2004). The central problem is to embed this type of environment, which can also be found for other query and transformation languages, into a dynamic, mediated service setting.

Connector Construction and Query Composition

We have adapted Xcerpt to support the construction of service connectors, i.e. executable query and transformation programs that integrate different data services:

•	In order to promote modularity and code reuse, individual integration rules should not be designed to perform complex transformation tasks; rather, a composition of individual rules is preferable. The composition of rules through rule chaining demands that the query part of a service connector be built ahead of the construction part.
•	The data representation of the global data model changes as element names change or elements are removed; these changes should not affect the query and integration parts of the rules. Only an additional construction part is needed to enable versioning of the global data model.

Modularity and incomplete query specifications turn out to be essential features that are required from a query and transformation language in our context. In order to achieve the compositionality of modular rules, a layered approach shall be taken:

•	Ground rules are responsible for populating XML data in the form of Xcerpt data terms by reading XML documents from individual service providers. These ground rules are tightly coupled to individual data Web services. These rules instruct the connector where to retrieve elements of data objects.
•	The Xcerpt data terms are consumed subsequently by non-ground queries based on intermediate composite rules. These rules are responsible for integrating ground rules to render data types in the global XML schema. However, these rules still do not produce output.
•	Finally, the composite rules are responsible for rendering the data objects defined in the interfaces of the mediator Web services
based on customer requests. The composite rules are views on top of the ground and intermediate representations according to the global schema. Therefore, the exported data from a mediator Web service is the goal of the corresponding connector (a query program). Xcerpt is a document-centric language, designed to query and transform XML documents. Therefore, ground rules, which read individual data elements from the resources, are associated with at least one resource identifier. This is a bottom-up approach in terms of data population, because data is assigned from the bottom level of the rules upward until it reaches the ultimate goal of a hierarchically structured rule. These rules are defined through an integration goal (the top-level query program) and structured into sub-rules down to ground rules. These layered rules are saved in a repository. When needed, a rule is picked, and a backward rule chaining technique for rule composition enables data objects to be populated to answer transformation requests. Rule chaining means that the resulting variable bindings from a transformation rule that is used within a query program are chained with those of the query program itself. Rule chaining is used to build recursive query programs. Consistent connectors can then be constructed on the fly based on input data such as the data services and the layered rules. We apply backward goal-based rule chaining to execute complex queries based on composite rules. Figure 4 shows an example of this pattern matching approach that separates a possibly partial query into resource and construction parts. The transformation rule maps the supportIdentifier element of the customer example from Figure 1. Figure 4 is a composite rule based on the SupportIdentifier construction rule at a lower level. Figure 5 demonstrates the transformation that produces the resulting XML data for the Customer service. The output from the Customer mediator
represents a customer as identified in a servicing system. In the example, rule CustomerArray is a composite rule, based on the Customer and Service rules, that could be used to answer a user query directly. The resource identifiers in the form of variables and the interfaces for the data representation will be supplied to the connector generator. Rule mappings in the connector generator determine which queries are constructed from the repository for execution.
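Figures 4 and 5 are not reproduced in this text; the following hypothetical sketch illustrates the backward goal-based rule chaining they describe, with a goal-based query program whose query part is answered by a lower-level CONSTRUCT rule (the resource URIs and element names are invented for illustration):

    GOAL
      out {
        resource { "file:customer-result.xml" },
        CustomerArray [ all var C ]
      }
    FROM
      var C -> customer [[ supportIdentifier [[ var Id ]] ]]
    END

    CONSTRUCT
      customer [ supportIdentifier [ var Id ] ]
    FROM
      in {
        resource { "file:customer-source.xml" },
        arrayOfCustomer [[ item [[ supportID [ var Id ] ]] ]]
      }
    END

At evaluation time, the engine chains the goal's query against the head of the CONSTRUCT rule, so that bindings produced by the lower-level rule flow upward into the goal; this mirrors the layered ground/intermediate/composite rule structure described above.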
THE MEDIATED SERVICE INTEGRATION ARCHITECTURE

We propose a mediated service-based architecture for the integration of XML data in Web service-based information systems. The major aims of the proposed mediated software architecture for the integration and mediation of XML data in the context of Web services are threefold: improved modifiability through declarative rule-based query programs, improved reusability of declarative integration rules through automated connector construction, and improved flexibility through dynamic generation of consistent, i.e. semantics-preserving, connectors.
Service-Based Mediator Architectures

A declarative, rule-based approach can be applied to the data transformation problem (Orriens et al., 2003; Peltier et al., 2001). The difficulty lies in embedding a declarative transformation approach into a service-based architecture in which clients, mediators, and data provider services are composed (Garcia-Molina et al., 1997). A data integration engine can be built in the Web service business process execution language WS-BPEL. In Rosenberg and Dustdar (2005), a business rule engine-based approach is introduced to separate the business logic from the executable WS-BPEL process, which demonstrates that one
Figure 4. Transformation specification in Xcerpt based on goal chaining with one goal-based query program and two supporting transformation rules (© 2008, Claus Pahl. Used with permission)
Figure 5. The composite rules for customer transformation in Xcerpt (© 2008, Claus Pahl. Used with permission)
of our objectives can be achieved (Rouvellou et al., 2000). These rules, stored in a repository, can be used to dynamically create executable query and transformation programs using a consistency-guaranteeing connector or integration service as the mediator. These integration services are the cornerstones of a mediator architecture that processes composite client queries that possibly involve different data sources provided by different Web services (Wiederhold, 1992). Mediators in an architecture harmonise and present the information available in heterogeneous data sources (Stern & Davis, 2004). This harmonisation comes in the form of an identification of semantic similarities in data while masking their syntactic differences. Figures 1 and 2 have illustrated an example whose foundations we have defined in terms of an ontology in order to guarantee consistency for transformations.

Zhu et al. (2004) and Widom (1995) argue that traditional data integration approaches such as federated schema systems and data warehouses fail to meet the requirements of constantly changing and adaptive environments. With the support of Web service technology, however, it is possible to encapsulate integration logic in a separate component as a mediator Web service between heterogeneous data service providers and consumers. Therefore, we build a connector construction component as a separate integration service, based on (Szyperski, 2002; Haller et al., 2005; Zhu et al., 2004; Rosenberg & Dustdar, 2005). We develop an architecture where broker or mediator functionality is provided by a connector generator and a transformation engine:

•	The connector construction is responsible for providing connectors based on transformation rules to integrate and mediate XML documents. The connector construction generates, based on schema information and transformation rules, an executable service process that gathers information from the required resources and generates a query/transformation program that compiles and translates the incoming data into the required output format.
•	The process execution engine is responsible for the integration of XML data and mediation between clients, data providers, and the connector component. The execution engine is implemented in WS-BPEL and shall access the Xcerpt runtime engine, which executes the generated query/transformation program.
The connector construction component is responsible for converting the client query, dynamically creating a transformation program based on stored declarative transformation rules, and passing all XML data and programs to the execution engine. The system architecture is explained in Figure 6 with a few sample information services from an application service provider scenario: Customer Data, E-business System, and Request Analysis Service. Exposing data sources as services is only the first step towards building an SOA solution. Without a service integrator, the data user needs to understand each of the data models and relationships of the service providers. The mediator architecture has the following components:

•	Query service. The query service is responsible for handling inbound requests from the application consumer side and transferring outbound results back. The WS-BPEL process engine handles the internal messaging of the architecture. The query service decomposes input messages into a set of pre-defined WS-BPEL processes.
•	Mediator (BPEL) engine. A mediator engine is itself a WS-BPEL process. Mediators deliver data according to a global schema. The schema may consist of various data entities for large enterprise integration solutions.
Figure 6. Component view of a mediator Web service with interactions (© 2008, Claus Pahl. Used with permission)
•	Connector generation service. This component is responsible for generating connectors for transforming messages both entering the WS-BPEL engine from service clients and leaving the WS-BPEL engine from data provider services, according to the global data model.

The active components, provided as information services, are complemented by two repositories:

•	Transformation rule repository. The repository allows the reuse of rules and can support multiple versions of service providers and mediator services.
•	Schema repository. The repository stores the WSDL metadata and the XML schema information for the Web service providers and the mediator Web service. The schema information is used to validate the XML documents at runtime before they are integrated and returned to the client applications.
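To make the interplay of these components concrete, here is a minimal, hypothetical WS-BPEL-style sketch of the mediation flow; the partner link and operation names are invented for illustration and the actual process definitions are not given in the text:

    <process name="MediatorProcess"
             xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <sequence>
        <!-- inbound client query received by the query service -->
        <receive partnerLink="client" operation="query"
                 variable="clientRequest" createInstance="yes"/>
        <!-- obtain an executable Xcerpt connector for this request -->
        <invoke partnerLink="connectorGenerator" operation="generateConnector"
                inputVariable="clientRequest" outputVariable="connector"/>
        <!-- run the generated query/transformation program; the connector
             itself references the provider data services it reads from -->
        <invoke partnerLink="xcerptEngine" operation="execute"
                inputVariable="connector" outputVariable="resultData"/>
        <!-- return the integrated result to the client -->
        <reply partnerLink="client" operation="query" variable="resultData"/>
      </sequence>
    </process>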
Connector Generation

The construction of a service connector means to generate an executable Xcerpt query program by composing each Xcerpt query with the corresponding transformation rules. In an Xcerpt query program, there is only one goal query, which will be processed first. The goal query is made up of composite transformation rules that in turn are made up of ground rules that read XML data from external resources. The process begins by expanding each composite query according to the definitional data mappings that are stored in a rule repository. The rule chaining mechanism in Xcerpt needs the goal query and all supporting queries in one query program at runtime. The Xcerpt runtime engine reads XML-based resources and populates them into data terms before the query terms can start to evaluate them. The drawback is that all resource identifiers have to be specified inside a query program rather than be passed into a query program as parameters. Consequently, we adapted the Xcerpt approach to processing transformation requests in an information integration solution. The resource identifiers are not hard-coded in ground rules in our setting, in order to achieve the desired loose coupling and, with it, flexibility and reusability. These resource identifiers are invisible to the connector construction service. Xcerpt does not support automatic query program construction by default, although it provides the necessary backward rule chaining technique to evaluate a chain of queries.
We have developed a wrapper mechanism to pass the resource identifiers from the goal level down to the ground rules. Therefore, as an extension to the original Xcerpt approach, a mediator-based data integration architecture is needed where the rules are decoupled from the resources and only the generated Xcerpt-based connectors are integrated with the client and provider Web services. WS-BPEL code that coordinates the mediation and transformation process is generated by a connector generator for transformations within the mediator service.
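As a hypothetical illustration of the wrapper idea (the concrete mechanism is not shown in the text): the connector generator can instantiate a stored ground-rule template with the provider's resource URI before handing the assembled program to the Xcerpt engine, so that the rule repository itself stays resource-independent. In schematic Xcerpt-style syntax, with RESOURCE_URI acting as a generation-time placeholder that the generator substitutes textually:

    CONSTRUCT
      customerData [ all var Item ]
    FROM
      in {
        resource { RESOURCE_URI },
        arrayOfCustomer [[ var Item ]]
      }
    END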
FUTURE TRENDS

Integration has so far been investigated from a static perspective, looking at the integration of existing systems. We discuss emerging needs to address this as part of software evolution and legacy systems integration. Another current trend is the increasing utilisation of semantic enhancements, such as ontologies and reasoning frameworks, to support integration. We briefly address attempts to use service ontologies, which would complement the presented ontology-based information architecture.

Re-engineering and the integration of legacy systems is an aspect that goes beyond the integration context we described, although the application service provider (ASP) context is a typical example of a field where ASPs currently convert their systems into service-based architectures (Seltsikas and Currie, 2002). The introduction of data transformation techniques for re-engineering activities can improve the process of re-engineering legacy systems and adopting service-oriented architecture to manage information technology services (Zhang and Yang, 2004). Business rules often change rapidly, requiring the integration of legacy systems to deliver a new service. How to handle information integration in the context of service management has not yet been explored
in sufficient detail in the context of transformation and re-engineering. The utilisation of the semantic knowledge that is available to represent the services that make up the mediator architecture is another promising direction that would increase flexibility in terms of dynamic composition. The functionality and quality attributes of Web services can be described in terms of one of the widely known service ontologies such as OWL-S or WSMO (Payne and Lassila, 2004). Abstract service descriptions can be derived from the semantic properties of the data they provide, process, or consume. Some progress has been made with respect to semantics-based service discovery and composition; the interplay between semantic data integration and semantic service integration needs a deeper investigation. Karastoyanova et al. (2007), for instance, discuss a middleware architecture to support semantic data mediation based on semantically annotated services. Their investigation demonstrates how semantic data mediation can be incorporated into a service-based middleware architecture that supports SOA-based development. However, the need for an overarching semantic information architecture also becomes apparent, which supports our results.
CONCLUSION

The benefit of information systems on demand must be supported by corresponding information management services. Many application service providers are currently modifying their technical infrastructures to manage and integrate information using a Web services-based approach. However, the question of handling information integration in a flexible and modifiable way in the context of service-based information systems has not yet been fully explored. The presented framework utilises semantic information integration technologies for XML data in service-oriented software architectures. The
crucial solutions for the information integration problem are drawn from mediated architectures and data model transformation, allowing the XML data from local schemas to be consistently transformed, merged and adapted according to declarative, rule-based integration schemas for dynamic and heterogeneous environments. We have proposed a declarative style of transformation based on a semantic, ontology-based data model, with implicit source model traversal and target object creation. The development of a flexible mediator service is crucial for the success of the service-based information systems architecture from the deployment point of view. Our solution based on the query and transformation language Xcerpt is meant to provide a template for other similar languages. One of our central objectives was to introduce an integration solution from a technical perspective. A number of extensions of our approach would strongly benefit its flexibility. Essentially, we plan to address the trends outlined in the previous section. Systems evolution and legacy system integration shall be addressed through a more transformation systems-oriented perspective on integration. We are also working on an integration of service ontologies and general data-oriented domain ontologies for service-oriented architectures.
REFERENCES

W3C – The World Wide Web Consortium. (2004). The Semantic Web Initiative. Retrieved March 9, 2008, from http://www.w3.org/2001/sw

Alonso, G., Casati, F., Kuno, H., & Machiraju, V. (2004). Web Services – Concepts, Architectures and Applications. Berlin, Germany: Springer-Verlag.
Bry, F., & Schaffert, S. (2002). Towards a Declarative Query and Transformation Language for XML and Semistructured Data: Simulation Unification. In Proceedings Intl. Conference on Logic Programming, LNCS 2401, (pp. 255-270). Heidelberg, Germany: Springer-Verlag.

Daconta, M. C., Obrst, L. J., & Smith, K. T. (2003). The Semantic Web – a Guide to the Future of XML, Web Services, and Knowledge Management. Indianapolis, USA: Wiley & Sons.

Garcia-Molina, H., Papakonstantinou, Y., Quass, D., Rajaraman, A., Sagiv, Y., & Ullman, J. D. (1997). The TSIMMIS approach to mediation: Data models and languages. Journal of Intelligent Information Systems, 8(2), 117-132. doi:10.1023/A:1008683107812

Haller, A., Cimpian, E., Mocan, A., Oren, E., & Bussler, C. (2005). WSMX - a semantic service-oriented architecture. In Proceedings Intl. Conference on Web Services ICWS 2005, (pp. 321-328).

Jhingran, A. D., Mattos, D., & Pirahesh, N. H. (2002). Information Integration: A research agenda. IBM Systems Journal, 41(4), 55-62.

Karastoyanova, D., Wetzstein, B., van Lessen, T., Wutke, D., Nitzsche, J., & Leymann, F. (2007). Semantic Service Bus: Architecture and Implementation of a Next Generation Middleware. In Proceedings of the Second International Workshop on Service Engineering SEIW 2007, (pp. 347-354).

Lenzerini, M. (2002). Data integration: A theoretical perspective. In Proceedings Principles of Database Systems Conference PODS'02, (pp. 233-246).

Orriens, B., Yang, J., & Papazoglou, M. (2003). A Framework for Business Rule Driven Web Service Composition. In M. A. Jeusfeld & O. Pastor (Eds.), Proceedings ER'2003 Workshops, LNCS 2814, (pp. 52-64). Heidelberg, Germany: Springer-Verlag.
Payne, T., & Lassila, O. (2004). Semantic Web Services. IEEE Intelligent Systems, 19(4), 14–15. doi:10.1109/MIS.2004.29
Szyperski, C. (2002). Component Software: Beyond Object-Oriented Programming – 2nd Ed. New York, USA: Addison-Wesley.
Peltier, M., Bezivin, J., & Guillaume, G. (2001). MTRANS: A general framework, based on XSLT, for model transformations. In Proceedings of the Workshop on Transformations in UML WTUML'01. Retrieved 21 July 2008 from http://citeseer.ist.psu.edu/581336.html
Widom, J. (1995). Research problems in data warehousing. In Proceedings of 4th International Conference on Information and Knowledge Management, (pp. 25-30).
Reynaud, C., Sirot, J. P., & Vodislav, D. (2001). Semantic Integration of XML Heterogeneous Data Sources. In Proceedings IDEAS Conference 2001, (pp. 199-208).

Rosenberg, F., & Dustdar, S. (2005). Business Rules Integration in BPEL - A Service-Oriented Approach. In Proceedings 7th International IEEE Conference on E-Commerce Technology, (pp. 476-479).

Rouvellou, I., Degenaro, L., Rasmus, K., Ehnebuske, D., & McKee, B. (2000). Extending business objects with business rules. In Proceedings 33rd Intl. Conference on Technology of Object-Oriented Languages, (pp. 238-249).

Schaffert, S. (2004). Xcerpt: A Rule-Based Query and Transformation Language for the Web. PhD thesis, University of Munich.

Seltsikas, P., & Currie, W. L. (2002). Evaluating the application service provider (ASP) business model: the challenge of integration. In Proceedings 35th Annual Hawaii International Conference 2002, (pp. 2801-2809).

Stal, M. (2002). Web Services: Beyond Component-based Computing. Communications of the ACM, 45(10), 71-76. doi:10.1145/570907.570934

Stern, A., & Davis, J. (2004). Extending the Web services model to IT services. In Proceedings IEEE International Conference on Web Services, (pp. 824-825).
1010
Wiederhold, G. (1992). Mediators in the architecture of future information systems. IEEE Computer, 25, 38–49. Zhang, Z., & Yang, H. (2004). Incubating Services in Legacy Systems for Architectural Migration. In Proceedings 11th Asia-Pacific Software Engineering Conference APSEC’04, (pp. 196-203). Zhu, F., Turner, M., Kotsiopoulos, I., Bennett, K., Russell, M., Budgen, D., et al. (2004). Dynamic Data Integration Using Web Services. In Proceedings 2nd International Conference on Web Services ICWS’2004, (pp. 262-269). Zhu, Y. (2007). Declarative Rule-based Integration and Mediation for XML Data in Web Servicebased Software Architectures. M.Sc. Thesis. Dublin City University.
ADDItIONAL rEADING textbooks Bass, L. Clements, & P. Kazman, R. (2003). Software Architecture in Practice. 2nd Edition. Boston, USA: Addison-Wesley. Krafzig, D., Banke, K., & Slama, D. (2004). Enterprise SOA: Service-Oriented Architecture Best Practices. Upper Saddle River, USA: Prentice Hall. Mahmoud, Q. H. (2004). Middleware for Communications: Concepts, Designs and Case Studies. Indianapolis, USA: John Wiley and Sons.
Consistency and Modularity in Mediated Service-Based Data Integration Solutions
Articles Abiteboul, S., Benjelloun, O., & Milo, T. (2002). Web services and data integration. In Proceedings of the Third International Conference on Web Information Systems Engineering. (pp. 3-6). Bengtsson, P., Lassing, N., Bosch, J., & Vliet, H. (2004). Architecture-Level Modifiability Analysis (ALMA). Journal of Systems and Software, 69(1), 129–147. doi:10.1016/S0164-1212(03)00080-3 Bing, Q., Hongji, Y., Chu, W. C., & Xu, B. (2003): Bridging legacy systems to model driven architecture. In Proceedings 27th Annual International Computer Software and Applications Conference COMPSAC 2003. (pp. 304- 309). Bolzer, M. (2005). Towards Data-Integration on the Semantic Web: Querying RDF with Xcerpt. Master Thesis. University of Munich. Calvanese, D., Giacomo, G., Lenzerini, M., & Nardi, D. (2001). Data Integration in Data Warehousing. International Journal of Cooperative Information Systems., 10(3), 237–271. doi:10.1142/ S0218843001000345 Djuric, D. (2004). MDA-based Ontology Infrastructure. Computer Science and Information Systems, 1(1), 91–116. doi:10.2298/CSIS0401091D
Li, S.-H., Huang, S.-M., Yen, D. C., & Chang, C.-C. (2007). Migrating Legacy Information Systems to Web Services Architecture. Journal of Database Management, 18(4), 1–25. Milanovic, N., & Malek, M. (2004). Current solutions for Web service composition. IEEE Internet Computing, 8(6), 51–59. doi:10.1109/ MIC.2004.58 Milo, T., & Zohar, S. (1998). Using Schema Matching to simplify heterogeneous Data Translation. Proceeding of the Int’l VLDB Conference. (pp. 122-133). Oquendo, F. (2006). π-Method: a model-driven formal method for architecture-centric software engineering. SIGSOFT Software Engineering Notes, 31(3), 1–13. doi:10.1145/1127878.1127885 Pahl, C. (2007). An Ontology for Software Component Description and Matching. International Journal on Software Tools for Technology Transfer, 9(2), 169–178. doi:10.1007/s10009006-0015-9 Pahl, C. (2007). Semantic Model-Driven Architecting of Service-based Software Systems. Information and Software Technology, 49(8), 838–850. doi:10.1016/j.infsof.2006.09.007
Hasselbring, W. (2000). Information System Integration. Communications of the ACM, 43(6), 32–36. doi:10.1145/336460.336472
Rahm, E., & Bernstein, A. (2001). A Survey of Approaches to Automatic Schema Matching. The VLDB Journal, 10(4), 334–350. doi:10.1007/ s007780100057
Hasselbring, W. (2002). Web data integration for e-commerce applications. IEEE MultiMedia, 9(1), 16–25. doi:10.1109/93.978351
Selic, B. (2003). The Pragmatics of Model-Driven Development. IEEE Software, 20(5), 19–25. doi:10.1109/MS.2003.1231146
Lehti, P., & Fankhauser, P. (2004). XML data integration with OWL: experiences and challenges. In Proceedings 2004 International Symposium on Applications and the Internet. (pp. 160-167).
Sheth, A. P., & Larson, A. (1990). Federated database systems for managing distributed, heterogeneous, and autonomous databases. ACM Computing Surveys, 22(3), 183–236. doi:10.1145/96602.96604
Levy, A. (1998). The information manifold approach to data integration. IEEE Intelligent Systems, 13, 12–16.
1011
Consistency and Modularity in Mediated Service-Based Data Integration Solutions
Velegrakis, Y., Miller, R., & Mylopoulos, J. (2005). Representing and Querying Data Transformations. In Proceedings of the 21st International Conference on Data Engineering ICDE’05. (pp. 81-92). Willcocks, P., & Lacify, C. (1998). The sourcing and outsourcing of IS: Shock of the New? P. Willcocks and C. Lacity (Eds.), Strategic Sourcing of Information Technology: Perspective and Practices. Chichester, UK: Wiley.
Yang, Y., Peng, X., & Zhao, W. (2007). An Automatic Connector Generation Method for Dynamic Architecture. In Proceedings International Computer Software and Applications Conference COMPSAC 2007. (pp. 409-414).
standards Object Management Group. (2003). ModelDriven Architecture MDA Guide V1.0.1. OMG.
This work was previously published in Services and Business Computing Solutions with XML: Applications for Quality Management and Best Processes, edited by Patrick Hung, pp. 98-113, copyright 2009 by Business Science Reference (an imprint of IGI Global).
1012
1013
Chapter 4.8
Data Warehouse and Business Intelligence Systems in the Context of E-HRM
Martin Burgard, Saarland University, Germany
Franca Piazza, Saarland University, Germany
DOI: 10.4018/978-1-59904-883-3.ch034
INTRODUCTION
The increased use of information technology leads to the generation of huge amounts of data which have to be stored and analyzed by appropriate systems. Data warehouse systems allow the storage of these data in a special multidimensional database. Based on a data warehouse, business intelligence systems provide different analysis methods such as online analytical processing (OLAP) and data mining to analyze these data. Although these systems are already widely used and their usage is still growing, their application in the area of electronic human resource management (e-HRM) is rather scarce. Therefore, the objective of this chapter is to depict the components and
functionality of these systems and to illustrate the application possibilities and benefits of these systems by selected application examples in the context of e-HRM.
BACKGROUND
In the past the importance of data warehouse and business intelligence systems has continuously increased, and the rate of companies using a data warehouse and/or a business intelligence system is rather high (e.g., Watson, Annino, Wixom, Avery, & Rutherford, 2001). An increasing number of case study publications (e.g., Marks & Frolick, 2001; Watson, Wixom, Hoffer, Anderson-Lehman & Reynolds, 2006) and general literature for practitioners (e.g., Humphries, Hawkins, & Dy, 1999)
are further indicators showing the ever-growing importance of these systems. On the other hand, publications concerning these system categories in the context of e-HRM are rather scarce, apart from short discussions of isolated topics such as online recruiting (Lin & Stasinskaya, 2002), enterprise resource planning (Ashbaugh & Miranda, 2002), or human resource information systems (Kovach, Hughes, Fagan, & Maggitti, 2002). Data warehouse and business intelligence systems are commonly used in sales or marketing departments. In contrast, their use in HR departments is relatively low (Watson et al., 2001). However, the adoption of these systems in the context of e-HRM offers new potentials for the management of human resources. In the following, their technical and functional aspects are depicted.
A data warehouse is defined as a “subject-oriented, integrated, non-volatile and time-variant collection of data in support of management’s decisions” (Inmon, 2005, p. 29). The main task of the data warehouse is thus to integrate the data from a variety of different source systems existing inside and outside a company into a single database and to store the data in a multidimensional structure which is optimized to support management’s analysis activities. In doing so, the operative systems are no longer burdened with the reporting requests of management, which previously resulted in poor system performance.
The data warehouse is the core component of the data warehouse system, which further consists of several components (see Figure 1): the extraction, transformation, and loading system (ETL-system), the administration system, the archiving system, and the metadata repository. To integrate the data in the data warehouse, the ETL-system enables the extraction of data from different source systems. Furthermore, the ETL-system transforms the data to eliminate syntactic and semantic defects and harmonizes the structure and values of the data. After the transformation, the relevant data is loaded into the data warehouse.
Figure 1. Reference architecture of data warehouse and business intelligence systems
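To make the ETL steps just described concrete, the following minimal Python sketch extracts records from two hypothetical source systems, harmonizes their structure and values, and loads the integrated result into a warehouse table. The column names, the sample data, and the use of pandas and SQLite are illustrative assumptions only, not features of any particular product:

```python
import sqlite3
import pandas as pd

# Extract: records as they might arrive from two (hypothetical) source systems.
payroll = pd.DataFrame({"emp_id": [1, 2], "dept": ["finance ", "PRODUCTION"],
                        "salary": [4200.0, 3100.0]})
time_sys = pd.DataFrame({"employee": [1, 2], "division": ["Finance", "Production"],
                         "hours": [160, 172]})

# Transform: harmonize column names and values (syntactic and semantic defects).
payroll = payroll.rename(columns={"emp_id": "employee_id", "dept": "division"})
time_sys = time_sys.rename(columns={"employee": "employee_id"})
for frame in (payroll, time_sys):
    frame["division"] = frame["division"].str.strip().str.title()

# Load: write the integrated result into the warehouse (SQLite as a stand-in).
staging = payroll.merge(time_sys, on=["employee_id", "division"])
with sqlite3.connect("hr_warehouse.db") as conn:
    staging.to_sql("fact_personnel", conn, if_exists="replace", index=False)
```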
The data warehouse stores the data in multidimensional data structures, so-called cubes, in order to provide optimized analysis possibilities (e.g., Inmon, 2005). The administration system supports data modeling, ETL scheduling, user administration, and monitoring. As some data become obsolete and do not have to be accessible for ongoing management analysis, they can be stored in the archiving system. Furthermore, the archiving system allows backup of the data. To manage the huge amount of data stored in a data warehouse, information about the data, so-called metadata, such as calculation rules, content descriptions, or usage documentation, is necessary. The depicted components constitute the data warehouse system, which is the basis for analysis activities performed by using a business intelligence system. Business intelligence systems subsume different technologies and methods to access and analyze the data stored in a data warehouse (Turban, Aronson, & Liang, 2005). The core components of business intelligence systems are OLAP and data mining.
Although there is no explicit definition of OLAP in the literature, the common understanding of OLAP refers to the possibilities to consolidate, view, and analyze data according to multiple dimensions (Codd, Codd, & Salley, 1993). The fast analysis of shared multidimensional information (FASMI) concept characterizes OLAP by means of five attributes: fast, analysis, shared, multidimensional, and information (Pendse & Creeth, 1995). The attribute fast refers to the requirement that a data request has to be fulfilled quickly, so that the user performing the analysis does not have to wait long for a system response. Following the FASMI concept, an average timeframe of five seconds characterizes the response time of OLAP requests. Analysis, as a further attribute of OLAP, denotes the ability to cope with business logic and to facilitate the user’s analysis activities. The data can be analyzed interactively by the user without any programming knowledge. The attribute shared refers to the need to implement the necessary security and integrity requirements in OLAP. Customarily, several users try to analyze the same data at the same time. Hence, an appropriate update locking and authorization concept has to be assured. Furthermore, multidimensionality constitutes a characteristic of OLAP. OLAP provides a multidimensional conceptual view of the data, which means that relevant measures can be analyzed along multiple different dimensions. In addition, OLAP provides the possibility to analyze the data on different hierarchical levels. Measures such as headcount, for instance, can be analyzed on a detailed, disaggregated level (e.g., the headcount of a certain department) and on an aggregated level (e.g., the headcount of the whole company). The operations for such an interactive hierarchical analysis are called drill-down and roll-up. Finally, information as the fifth attribute refers to the delivery of the requested data in a transparent way. The FASMI concept thus centers on user requirements and delivers an appropriate characterization of OLAP.
Besides OLAP, data mining constitutes another core component of business intelligence systems. Data mining aims at revealing unknown patterns in large databases (Fayyad, Piatetsky-Shapiro, & Smyth, 1996). A variety of methods from different areas such as statistics, machine learning, and artificial intelligence are subsumed under the concept of data mining. All of these methods aim at generating hypotheses out of the data without any prior assumption about coherences in the data. This constitutes the main difference from OLAP. OLAP relies on the user’s ability to survey the data which describe a specific situation. Especially in very complex decision situations, where the coherences in the data are unknown and hence cannot be analyzed by user-driven interactive queries, data mining can provide further information.
The most important functions of data mining are classification, segmentation, and association analysis. Classification consists of examining the stored data and assigning them to one of a predefined set of classes. Based on preclassified data, a model is automatically generated. This model allows the assignment of unclassified data to a certain class. Applied methods are, for example, rule induction, discriminant analysis, or multilayer perceptrons (Berry & Linoff, 2004; Cho & Ngai, 2003). Segmentation is the task of grouping heterogeneous data into more homogeneous subgroups, so-called clusters. In contrast to classification, segmentation does not rely on predefined classes but rather generates clusters during the segmentation process. It is up to the user to determine the meaning of the resulting clusters. Applied segmentation methods are, for example, cluster analysis or self-organizing maps (Berry & Linoff, 2004). The association analysis aims at finding frequently appearing combinations of data. Two types can be distinguished. While the static association analysis identifies combinations that exist simultaneously, the sequential association analysis includes the temporal aspect and discovers the successive appearance of data (Agrawal, Imielinski, & Swami, 1993; Agrawal & Srikant, 1995).
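To make the association idea tangible, the short fragment below counts frequently co-occurring attribute pairs in a toy data set. It is a deliberate simplification of Apriori-style mining (Agrawal, Imielinski, & Swami, 1993); the records and the support threshold are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy records: each row is a set of attributes observed together.
records = [
    {"merit:high", "supervisor:Smith", "seminar:management"},
    {"merit:high", "supervisor:Smith"},
    {"merit:low", "supervisor:Jones"},
    {"merit:high", "supervisor:Smith", "abroad:yes"},
]

# Count every pair of attributes that appears within the same record.
pair_counts = Counter(
    pair for record in records for pair in combinations(sorted(record), 2)
)

# Report pairs whose support reaches a minimum threshold.
MIN_SUPPORT = 3
for pair, count in pair_counts.items():
    if count >= MIN_SUPPORT:
        print(pair, count)  # ('merit:high', 'supervisor:Smith') 3
```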
THE ADOPTION OF DATA WAREHOUSE AND BUSINESS INTELLIGENCE SYSTEMS IN THE CONTEXT OF E-HRM
Although the use of data warehouse and business intelligence systems in the context of e-HRM is rather scarce, their usage offers a high potential to support and improve HRM. These systems can be adopted in all business areas where comprehensive, sophisticated, and data-intensive analyses are required to support decisions. Therefore, their adoption is appropriate for all HR tasks such as recruitment, appraisal, development, and compensation. Following Figure 1, a data warehouse containing only personnel-relevant data can be referred to as an HR data warehouse. Thereby, the main reason to adopt a data warehouse system in the context of e-HRM is the lack of consistency of HR information available in existing separate HR information systems and hence the impossibility of appropriate analysis and reporting (Van Wessel, Ribbers, & de Vries, 2006). This inconsistency is caused by a fragmented system landscape supporting the HR tasks. HR applications have often been developed in different functional areas (e.g., payroll, benefits, staffing, time and attendance) or geographic regions (e.g., North America, Europe, Asia Pacific). An HR data warehouse allows the integration of relevant personnel data from internal and external HR data sources and enables comprehensive analysis opportunities by employing business intelligence systems. In consideration of the proceeding outsourcing of HR activities and the loss of personal relationships between the employee and the HR department (Lepak & Snell, 1998), the ability to generate information about the employees out of HR data will gain
more importance for HR management (e.g., Srinivasa & Saurabh, 2001).
In the following, an exemplary application of an HR-relevant analysis using OLAP is depicted. For HR management, a detailed report on the number of sick persons, for instance, might be important. The relevant personnel data are stored in a cube (see Figure 2) containing the dimensions division, region, and time, and the measure number of sick persons. As OLAP is not limited to three dimensions, further dimensions such as line manager, qualification, and so forth are conceivable. According to the depicted characteristics of OLAP, an HR manager can navigate by using different operations; for instance, with the operation slicing it is possible to filter the number of sick persons in 2006 for all divisions in all regions. The operation dicing enables one to see the number of sick persons for 2004 and 2005 for the finance and production divisions in America and Europe, for example.
Figure 2. Illustration of an OLAP cube and examples for slicing and dicing
Furthermore, OLAP supports hierarchical analysis. In this context it might be of interest to analyze the number of sick persons of the finance division in America in 2006 in more detail. Using drill-down, a detailed report on the number of sick persons for every quarter of 2006 can be generated (see Figure 3). Furthermore, drill-down is not restricted to one level, so HR managers can also navigate to months, weeks, and days to receive more specific information. In return, the roll-up operation allows for analyzing data on a more aggregated level.
Figure 3. Example for the OLAP operations drill-down and roll-up
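The slicing, dicing, drill-down, and roll-up operations illustrated in Figures 2 and 3 can be emulated on a miniature fact table with pandas. The sketch below is for illustration only; its figures are invented, and a real OLAP server would offer these operations interactively rather than through code:

```python
import pandas as pd

# Miniature fact table standing in for the cube of Figure 2 (numbers invented):
# dimensions division, region, year, quarter; measure "sick" (sick persons).
cube = pd.DataFrame([
    ("Finance", "America", 2005, "Q1", 6), ("Finance", "America", 2005, "Q2", 7),
    ("Finance", "America", 2006, "Q1", 5), ("Finance", "America", 2006, "Q2", 4),
    ("Finance", "Europe", 2006, "Q1", 3), ("Production", "America", 2006, "Q1", 9),
], columns=["division", "region", "year", "quarter", "sick"])

# Slicing: fix one dimension (year = 2006) for all divisions in all regions.
slice_2006 = cube[cube["year"] == 2006]

# Dicing: restrict several dimensions at once (year, division, and region).
dice = cube[(cube["year"] == 2005)
            & cube["division"].isin(["Finance", "Production"])
            & cube["region"].isin(["America", "Europe"])]

# Roll-up: aggregate the quarters away to obtain yearly, more aggregated figures.
rollup = cube.groupby(["division", "region", "year"])["sick"].sum()

# Drill-down: navigate back to the quarterly detail for one cell of interest.
drill = cube[(cube["division"] == "Finance") & (cube["region"] == "America")
             & (cube["year"] == 2006)].groupby("quarter")["sick"].sum()
print(rollup, drill, sep="\n\n")
```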
These examples illustrate the analysis possibilities of OLAP, which can be employed in any HR task such as recruitment, appraisal, compensation, and development. OLAP is based on the user’s ability to survey the data and to navigate correspondingly. While an HR data warehouse stores HR data whose actual coherences are not always obvious, data mining extends the analysis possibilities to complex, data-rich situations. Data mining reveals unknown patterns in the data which can be used to support HR-relevant decisions. In the following, exemplary applications of data mining in selected HR contexts are depicted.
The data mining function classification can be used, for example, to support and accelerate the applicant selection process. Using classification methods, applicants can be allocated to predefined classes indicating the productivity of an employee. The classes high, medium, and low sales premiums might serve as indicators showing the productivity of an employee, for example (Cho & Ngai, 2003). The allocation is performed
automatically by the classification algorithm and is based on the stored attributes describing an applicant. The unknown pattern revealed by data mining consists in the relationship between application data and productivity. This enables an automatic preselection of potential employees and hence supports the HR manager in selecting employees with presumably high productivity. Many further application possibilities are conceivable, such as the analysis of the termination behavior of employees. It might be of interest to identify the employees who tend to resign, and to develop corresponding retention measures. Based on historical data concerning terminations, classification methods provide the possibility to automatically classify all employees into the classes of potentially terminating and non-resigning employees. This information can be used by the HR department to develop target-oriented retention measures (Min & Emam, 2003).
Segmentation, as a further function of data mining, generates homogeneous clusters of employees which can be used, for example, to create a personnel portfolio. Hence, the segmentation reveals additional information about the employees concerning their affiliation to a certain cluster. So it might be conceivable to generate clusters based on the attributes qualification, gender, age, and position of an employee, which leads to homogeneous clusters of employees concerning the named attributes, such as the cluster of older, highly qualified managers and the cluster of younger, highly qualified specialists. Based on this information, the development of cluster-specific training measures, for instance, is possible.
Finally, the association function can deliver useful information for the HR department. The association analysis reveals frequently appearing combinations of data. Based on the personnel data stored in an HR data warehouse, the association analysis might reveal, for example, frequently appearing combinations of a particular merit appraisal and a certain supervisor. This information can be used for further examinations concerning the bias of this supervisor. Furthermore, the temporal aspect can be integrated into the analysis by applying the sequential association analysis. So it might be of interest to the HR manager to reveal which successive steps are actually necessary to reach a certain position in the company. The result of the sequential analysis is a frequently used career path to reach this position (e.g., the employee attends a management seminar, then receives a high merit rating, then works in a foreign country, and finally gets the promotion). These revealed existing career paths can be compared to the career planning and lead to adjustments if significant differences appear. The depicted possibilities for adopting data mining in the context of e-HRM show that its main potential and benefit lie in supporting decisions in complex and data-rich situations, enabling the creation of more target-oriented personnel measures.
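The classification and segmentation functions discussed above can be sketched with scikit-learn as follows. The attribute names, the toy data, and the choice of a decision tree and k-means are illustrative assumptions, not a prescription for an actual HR system:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Classification: preclassified applicants (features: years of experience,
# aptitude test score) with productivity labels high/medium/low.
X_train = np.array([[8, 90], [6, 85], [3, 70], [2, 60], [1, 40], [0, 35]])
y_train = ["high", "high", "medium", "medium", "low", "low"]
classifier = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Automatic preselection: assign a new, unclassified applicant to a class.
print(classifier.predict([[5, 80]]))

# Segmentation: group employees (features: age, qualification level) into
# clusters whose meaning the user must interpret afterwards.
employees = np.array([[55, 4], [58, 4], [29, 4], [31, 4], [45, 1], [47, 1]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(employees)
print(labels)  # e.g., older/high-qualified vs. younger/high-qualified clusters
```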
FUTURE TRENDS
Considering the growing amount of HR data and the corresponding necessity to store and analyze these data, the application of data warehouse and business intelligence systems enables HR management to handle these data and to perform comprehensive HR planning and controlling. These systems will develop into standard analytical systems in all business areas, including HR. In this article some application possibilities and their benefits were introduced; nevertheless, research is necessary to systematically evaluate the application of these systems in the context of e-HRM. Research should cover recruitment, compensation, appraisal, and development as core tasks of HRM and develop reference data models and application scenarios to improve the decision-making process of HR managers. These systems are very complex and cost-intensive; hence, their implementation and employment do not automatically lead to benefits for HRM. Reference data models and application scenarios facilitate the use of these systems and enable the exhaustion of the potential benefits introduced in this article.
CONCLUSION
New technologies such as data warehouse and business intelligence systems enable consolidated storage and innovative analysis of personnel data. In this article the technical and functional
aspects of data warehouse and business intelligence systems were introduced. Based on the general functionalities of these systems, their application and possible benefits in the context of e-HRM were depicted. It could be shown that both systems are applicable in e-HRM. The HR data warehouse system serves as an integrated data storage which contains relevant personnel data from the multiple widespread HR applications existing in a company. The analysis methods provided by the business intelligence system extend the customarily applied methods of HR planning and controlling. Thereby, OLAP allows the interactive multidimensional analysis of personnel data, and data mining reveals hidden coherences in the data which can be used to optimize the decision-making processes of HR managers.
REFERENCES
Agrawal, R., Imielinski, T., & Swami, A. (1993). Mining association rules between sets of items in large databases. In P. Buneman & S. Jajodia (Eds.), Proceedings of the ACM SIGMOD Conference on Management of Data (pp. 207-216). Washington, DC: ACM Press.
Agrawal, R., & Srikant, R. (1995). Mining sequential patterns. In P. S. Yu & A. Chen (Eds.), Proceedings of the International Conference on Very Large Data Bases (VLDB) (pp. 3-14). Taipei, Taiwan: IEEE Computer Society.
Ashbaugh, S., & Miranda, R. (2002). Technology for human resources management: Seven questions and answers. Public Personnel Management, 31(1), 7–20.
Berry, M., & Linoff, G. S. (2004). Data mining techniques for marketing, sales and customer relationship. Indianapolis: Wiley.
Cho, V., & Ngai, E. (2003). Data mining for selection of insurance sales agents. Expert Systems: International Journal of Knowledge Engineering and Neural Networks, 20(3), 123–132. doi:10.1111/1468-0394.00235
Codd, E. F., Codd, S. B., & Salley, C. T. (1993). Providing OLAP to user analysts: An IT mandate. White paper, Codd & Associates. Retrieved September 6, 2006, from http://www.dev.hyperion.com/resource_library/white_papers/providing_olap_to_user_analysts.pdf
Fayyad, U. M., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery: An overview. In U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, & R. Uthurusamy (Eds.), Advances in knowledge discovery and data mining (pp. 1-34). Cambridge, MA: The MIT Press.
Humphries, M., Hawkins, M., & Dy, M. (1999). Data warehousing: Architecture and implementation. Upper Saddle River, NJ: Prentice Hall.
Inmon, W. H. (2005). Building the data warehouse. Indianapolis: Wiley Publishing.
Kovach, K., Hughes, A., Fagan, P., & Maggitti, P. (2002). Administrative and strategic advantages of HRIS. Employment Relations Today, 29(2), 43–48. doi:10.1002/ert.10039
Lepak, D., & Snell, S. A. (1998). Virtual HR: Strategic human resource management in the 21st century. Human Resource Management Review, 8(3), 215–234. doi:10.1016/S1053-4822(98)90003-1
Lin, B., & Stasinskaya, V. (2002). Data warehousing management issues in online recruiting. Human Systems Management, 21(1), 1–8.
Marks, W., & Frolick, M. (2001). Building customer data warehouses for a marketing and service environment: A case study. Information Systems Management, 18(3), 51–56. doi:10.1201/1078/43196.18.3.20010601/31290.7
Min, H., & Emam, A. (2003). Developing the profiles of truck drivers for their successful recruitment and retention: A data mining approach. International Journal of Physical Distribution & Logistics Management, 33(2), 149–162. doi:10.1108/09600030310469153
Pendse, N., & Creeth, R. (1995). The OLAP report. Retrieved September 6, 2006, from http://www.olapreport.com
Srinivasa, R., & Saurabh, S. (2001). Business intelligence and logistics (white paper). Retrieved September 18, 2006, from http://www.dmreview.com/whitepaper/wid328.pdf
Turban, E., Aronson, J., & Liang, P. (2005). Decision support systems and intelligent systems (7th ed.). Saddle River, NJ: Prentice Hall.
Van Wessel, R., Ribbers, P., & de Vries, H. (2006). Effects of IS standardization on business process performance: A case in HR IS company standardization. In R. H. Sprague (Ed.), Proceedings of the 39th Hawaii International Conference on System Sciences. Los Alamitos, CA: IEEE Computer Society Press.
Watson, H. J., Annino, D. A., Wixom, B. H., Avery, K. L., & Rutherford, M. (2001). Current practices in data warehousing. Information Systems Management, 18(1), 47–56. doi:10.1201/1078/43194.18.1.20010101/31264.6
Watson, H. J., Wixom, B. H., Hoffer, J. A., Anderson-Lehman, R., & Reynolds, A. M. (2006). Real-time business intelligence: Best practices at Continental Airlines. Information Systems Management, 23(1), 7–18. doi:10.1201/1078.10580530/45769.23.1.20061201/91768.2
KEY TERMS AND DEFINITIONS
Business Intelligence System: Subsumes different technologies and methods such as OLAP and data mining to analyze the data stored in a data warehouse.
Data Mining: Subsumes a variety of methods to extract unknown patterns out of a large amount of data. Data mining methods originate from the areas of machine learning, statistics, and artificial intelligence. The main tasks of data mining are classification, segmentation, and association analysis.
Data Warehouse: A subject-oriented, integrated, time-variant, and nonvolatile collection of data. The data are usually stored in multidimensional cubes, an optimized way to provide data for analysis purposes.
Data Warehouse System: Consists of several components. The core component is a data warehouse as a database; further components are the ETL-system, administration system, archiving system, and metadata repository.
HR Data Warehouse: A specific data warehouse for HR data. The company's HR-relevant data is collected, integrated, and stored with the aim of producing accurate and timely HR management information and supporting data analysis.
OLAP Operations: To analyze the data in OLAP cubes, navigation and hierarchical analysis are differentiated. Navigation operations are, for instance, slicing and dicing. Roll-up and drill-down support the hierarchical analysis. Thereby, the roll-up operation allows for analyzing data on a more aggregated level, and drill-down is the reverse operation.
Online Analytical Processing (OLAP): Refers to the possibilities to consolidate, view, and analyze data according to multiple dimensions. The fast analysis of shared multidimensional information (FASMI) concept characterizes OLAP by means of five attributes.
This work was previously published in Encyclopedia of Human Resources Information Systems: Challenges in e-HRM, edited by Teresa Torres-Coronas and Mario Arias-Oliva, pp. 223-229, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 4.9
Implementation of ERP in Human Resource Management
Zhang Li, Harbin Institute of Technology, China
Wang Dan, Harbin Institute of Technology, China
Chang Lei, Harbin Institute of Technology, China
DOI: 10.4018/978-1-60566-026-4.ch292
INTRODUCTION
In 1999, Peter Drucker said: “A new Information Revolution is well under way. It is not a revolution in technology, machinery, techniques, software or speed. It is a revolution in concepts.” As a result of information technology (IT) innovation and reorganization, enterprise resource planning (ERP) was proposed by the Gartner Group in the early 1990s. It is a successor to manufacturing resource planning (MRP II) and attempts to unify all departmental systems into a single, integrated software program that runs off a single database so that the various departments can more easily share information and communicate with each other (Koch, 2002). Over 60% of the U.S. Fortune 500 had adopted
ERP by 2000 (Kumar & Hillegersberg, 2000; Siau, 2004), and it was projected that organizations’ total spending on ERP adoptions would reach an estimated $72.63 billion in 2002 (Al-Mashari, 2002). Many scholars have recognized the importance of people in organizations, and this viewpoint is the central focus of the human resource management (HRM) perspective (Pfeffer, 1995). In this perspective, HRM has the potential to be one of the key components of overall enterprise strategy. Additionally, HRM may provide significant competitive advantage opportunities when it is used to create a unique (i.e., difficult to imitate) organizational culture that institutionalizes organizational competencies throughout the organization (Bowen & Ostroff, 2004). Typically, an ERP system supports HRM, operation and logistics, finance, and sales and marketing
functions (Davenport, 1998) (see Figure 1). Early ERP development in enterprises, however, centered on production and sales processes. Only recently has research empirically supported the positive relationship between corporate financial performance and the HRM function, and managers have also realized that HRM can deliver organizational excellence and competitive advantage for enterprises (Boudreau & Ramstad, 1997; Huselid, 1995; Wright, McMahan, Snell, & Gerhart, 2001). The HRM module was therefore introduced into ERP, forming a highly integrated and efficient resource system with the other function modules of ERP. However, there are still many HRM-related problems that may result in the failure of ERP projects. Accordingly, there have been regular appeals for more research on the implementation of ERP systems from the HRM perspective in the last few years (Barrett & Mayson, 2006).
This article introduces the functions of the HRM module in ERP systems in the fields of human resource planning, recruitment management, training management, time management, performance management, compensation management, and business trip arrangement. It then analyzes five HRM-related problems that may keep enterprises from implementing ERP successfully, and it provides reasonable recommendations. Finally, the article discusses future trends and suggests emerging research opportunities within the domain of the topic.
Figure 1. Function modules of an ERP system
BACKGROUND
ERP, a term coined by the Gartner Group, is not simply a tool that provides singular outputs, but rather an infrastructure that supports the capabilities of all other IT-based tools and processes utilized by a firm (Enslow, 1996). Shang and Seddon (2000) classified the different types of ERP benefits as IT infrastructure benefits, operational benefits, managerial benefits, strategic benefits, and organizational benefits. Palaniswamy (2002) pointed out that ERP projects failed not because the software was coded incorrectly, but rather because companies failed to understand the real organizational needs and the systems required to solve their problems and improve performance. Markus, Axline, Petrie, and Tanis (2000) analyzed adopters’ problems with ERP, including project phase problems, problems with product and implementation consultants, shakedown phase problems, underestimating data quality problems and reporting needs, and so on. Within the management literature, a coherent approach provides a conceptual basis for asserting that human resource is a key source of competitive advantage, since it offers a unique contribution to the value creation, rarity, imperfect imitability, and non-substitutability of a firm’s strategic resources (Bellini & Canonico, 2007). Stone (2007) considered the past, present, and future of HRM theory and research. He concluded that HRM theory and research has considerable potential to enhance organizational efficiency and effectiveness. Ashbaugh and Rowan (2002) summarized the technology features of a modern HRM system (see Table 1). In addition, some scholars have already studied the relationship between ERP implementation and HRM. For instance, Ashbaugh and Rowan (2002) argued that the major difference
Table 1. Technology features of a modern HRM system
• Integration
• Common relational database
• Flexible and scalable technology
• Audit trail and drill-down capabilities
• Robust security
• Workflow
• User friendliness
• Enhanced reporting and analysis
• Process standardization and malleability
• Internet and intranet capabilities
• Document management and imaging
between ERP and its predecessors (e.g., MRP II) is the linkage of financial and HRM applications through a single database in a software application that is both rigid and flexible. Wright and Wright (2002) listed two of the most-cited HRM risks in an ERP system: lack of user involvement and inadequate training. Hsu, Sylvestre, and Sayed (2006) supplied another often-overlooked HRM factor in implementing an ERP system: high stress levels among the staff, particularly in the finance or accounting departments, which are already under stress from the heavy workload of a legacy system. Li (2001) studied the HRM function module in an ERP system. He insisted that a practical HRM system should be built up to improve the incentive mechanism and to strengthen the training of employees while applying ERP.
IMPLEMENTATION OF ERP RELATED TO HRM
Functions of the HRM Module in an ERP System
We have studied the necessity and essentiality of HRM to the implementation of ERP in the preceding part of this article. The adoption of ERP also greatly impacts HRM by extending its functions across all areas of management (see Figure 2). The functions of HRM have developed from simple compensation calculation and personnel management to the fields of human resource planning, recruitment management, training management, time management, performance management, compensation management, and business trip arrangement (Ahmad & Schroeder, 2003; Li, 2001; Stone, 2007). Data from all function systems are collected into a central database, and the database can in turn supply the data needed by all function systems through integration, as sketched below.
Figure 2. Traditional and extended functions of HRM
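The idea of one central database feeding all HRM function modules can be illustrated with a toy schema. The table and column names below are invented and do not reflect the schema of any actual ERP product; the point is only that every module writes to and reads from one integrated data set:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT, division TEXT);
CREATE TABLE time_record (emp_id INTEGER REFERENCES employee, day TEXT, hours REAL);
CREATE TABLE appraisal (emp_id INTEGER REFERENCES employee, period TEXT, rating REAL);
""")

# Different function modules write into the shared database...
conn.execute("INSERT INTO employee VALUES (1, 'A. Example', 'Finance')")
conn.execute("INSERT INTO time_record VALUES (1, '2008-01-15', 8.0)")
conn.execute("INSERT INTO appraisal VALUES (1, '2008-H1', 4.5)")

# ...and any other module can read the integrated view back.
row = conn.execute("""
    SELECT e.name, SUM(t.hours), AVG(a.rating)
    FROM employee e
    JOIN time_record t USING (emp_id)
    JOIN appraisal a USING (emp_id)
    GROUP BY e.emp_id
""").fetchone()
print(row)  # ('A. Example', 8.0, 4.5)
```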
Human Resource Planning
Based on the requirements of the enterprise, managers can use the HRM module of an ERP system to establish human resource planning conveniently. The ERP system assists managers' decision making by simulating the performance of human resource plans and comparing the resulting data. Additionally, the ERP system is able to analyze and forecast human resource planning costs by integrating the relevant information.
Recruitment Management
Recruitment should be treated as a significant investment because human resources are the foundational assets of an enterprise. To keep its competitive advantages, the human resources department must have a reasonable recruitment system to select talent for the enterprise. The ERP system can support recruitment management in three ways. First, it optimizes the recruitment process to reduce the workload. Second, it offers scientific management of recruitment costs. Third, it provides useful information for decision making on recruitment management.
Training Management
Training covers the use of multiple skills, including process improvement skills, which can provide long-term work-life security rather than job security (Schonberger, 1994). The implementation of ERP can help train employees to acquire the technical, interpersonal, and business skills required to become fully participating team members in the early stage of team development (Pasmore & Mlot, 1994). In the other team development stages, by supporting the human resource department in making an appropriate training plan, the ERP system can also help train team members to accept new skills, improved management regulations, and so on.
Time Management
Time management supports the planning, controlling, and management processes of HRM. It allows the enterprise and its staff to arrange timetables flexibly according to the local calendar. The ERP system can record attendance rates and other relevant information by using a Telematics Control Unit (TCU). For example, data related to compensation are further processed in the compensation management system.
Performance Management
Performance evaluation might consider the following issues: How are the facilitative and operational activities allocated to individuals in an organization, and how does the facilitative content of a task vary in an organization (Nilakant, 1994)? The human resource department can establish an evaluation index system according to these issues. By integrating the performance management system with the time management system, the ERP system records data in a central database and keeps the relevant data current for each evaluation index. These data are also useful for managers' decision making on corporate strategy.
Compensation Management
A reasonable compensation system should be able to apply proper calculation methods for different regions, departments, positions, and so forth. The implementation of ERP achieves this objective by integrating the compensation management system with other systems (e.g., the time management system and the performance management system) so that relevant data are updated in a timely fashion, establishing a dynamic compensation calculation system. The human resource department can simulate the performance of the calculation system to forecast the compensation information needed and to adjust the structure of the compensation management system. This is an excellent improvement because it decreases management costs as well as problems caused by manual intervention. Compensation management also includes other functions such as salary payments, loans for staff, and so forth.
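Such a dynamic compensation calculation can be sketched in a few lines. The hourly rates, working hours, and bonus rule below are invented assumptions, used only to show how data from the time and performance management systems feed into the pay computation:

```python
# Invented inputs that would come from the time and performance modules.
hours_worked = {"emp_1": 160, "emp_2": 172}   # monthly hours from time management
ratings = {"emp_1": 4.5, "emp_2": 3.0}        # latest appraisal score (1-5 scale)
hourly_rate = {"emp_1": 30.0, "emp_2": 25.0}  # region/position-specific rate

def monthly_pay(emp: str) -> float:
    """Base pay from time data plus a performance bonus from appraisal data."""
    base = hours_worked[emp] * hourly_rate[emp]
    bonus = 0.10 * base if ratings[emp] >= 4.0 else 0.0  # assumed bonus rule
    return round(base + bonus, 2)

for emp in hours_worked:
    print(emp, monthly_pay(emp))  # emp_1 5280.0, emp_2 4300.0
```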
Business Trip Arrangement
A business trip arrangement system can control the whole flow of a business trip, from application through approval to reimbursement. These data are further processed in other function modules of ERP (e.g., the finance module) through systems integration.
Main HRM-Related Problems of ERP Implementation in Enterprises
ERP has been broadly applied in enterprises for nearly 20 years because of its enormous potential economic benefits. However, 53% of ERP projects in U.S. firms had failed by 1996, and the success rate of ERP projects in Chinese enterprises was still less than 20% by 2002 (Edwards, 1999; Yang & Zhao, 2003). Hsu et al. (2006) pointed out that HRM factors played a significant role in almost all failed information systems. This article analyzes five main HRM-related problems that
may block the success of ERP projects in enterprises (Markus et al., 2000; Wright & Wright, 2002; Hsu et al., 2006; Sun & Xu, 2006).
First is the shortage of professionals, especially interdisciplinary talents who are expert in both IT and management. ERP is not just an advanced technique; it is also an advanced management concept. Therefore, interdisciplinary professionals are crucial to the success of ERP. This problem is especially serious in small and medium-sized enterprises because of their weaker resources and management capabilities.
Second is a deficient talent introduction mechanism. Employees who take charge of an enterprise's ERP project are under high pressure because they bear great responsibility for the success of the implementation. If an enterprise does not have an effective talent introduction mechanism to attract the people it needs, it will be hard to start the ERP project at all, not to mention apply it successfully.
Third is insufficient education and training for employees. The cultivation of talent requires a large investment of time as well as money, while some enterprises are unwilling to invest much in the education and training of employees due to a lack of in-depth understanding of ERP.
Fourth is a poor incentive mechanism for employees. Above all, the compensation mechanisms of enterprises may not be attractive enough. A competitive compensation mechanism can not only attract applicants but also prevent the job hopping of employees. Moreover, ERP projects are often not supported enough by senior administrative departments. For example, the application of ERP needs the support of all involved departments, yet managers hesitate to assign departmental backbones to the ERP implementation. Excessive interference from superiors may also dampen the working enthusiasm of employees and stifle innovation.
Fifth is the lack of exterior consultation or a supervision system. An enterprise must pass the scientific verification of experts if it wants to adopt ERP. The implementation must be carried out under the direction of exterior professional organizations and the supervision of a special system. Enterprises may neglect the necessity and importance of these functions.
RECOMMENDATIONS
To implement ERP successfully, enterprises must pay attention to the following HRM tasks (Markus et al., 2000; Ashbaugh & Rowan, 2002; Sun & Xu, 2006):
• Enterprises should realize the importance of human resources. An ERP system is a product of IT, while human resources are part of the essential power behind the invention and improvement of IT. The idea that HRM is just a secondary function must be updated.
• Enterprises should redesign the functions of HRM. ERP extends the traditional functions of HRM greatly. Simple compensation calculation or personnel management cannot meet the requirements of an ERP system. The extended functions of HRM are more favorable for inter-departmental cooperation, because they can conveniently provide decision makers with the information needed.
• Enterprises should establish an effective talent introduction mechanism and an effective incentive mechanism for employees. An attractive compensation mechanism can provide a useful market competitive advantage for enterprises to hold talent. On the other hand, enterprises should give employees reasonable freedom to make decisions within their position responsibilities. This can enhance the working initiative as well as the sense of responsibility of employees.
• Enterprises should place great emphasis on personnel education and training. The human resource department can conduct awareness education that involves both senior managers and ordinary staff to raise their commitment to the ERP implementation. The frequency and quality of personnel training should be strengthened as well.
• Enterprises should enhance exterior consultation as well as the supervision function for the adoption of ERP. It is a good idea to invite exterior experts to the enterprise to provide practical guidance. The human resource department can also organize staff visits to enterprises that have implemented ERP successfully to draw on their experience. Additionally, the human resource department should take charge of the supervision function or help the enterprise establish a supervision department made up of interior managers and exterior experts.
FUTURE TRENDS
Enterprise resource planning II (ERP II) is a new concept introduced by the Gartner Group in 2000 to label the latest extensions to ERP (Classe, 2001). The new concept is that, having successfully integrated internal business applications such as finance, sales, and marketing to increase efficiency and create a total overview of the business, ERP II can be used to integrate external applications with collaborative commerce arrangements, e-business, and the supply chain (Payne, 2002). Traditional ERP is the main component in an ERP II system, but for the purposes of collaboration, an ERP II system is opened to the inflow and outflow of information (Moller, 2003). On the other hand, during the last decade, a new wave of human resource technology known as electronic human resource management (e-HRM) has emerged with the advent of intranet- and Internet-based technologies. E-HRM mainly connects staff and managers
with the human resource department electronically through the human resource portal (Lai, 2006). The basic expectations are that using e-HRM will decrease costs, improve the human resource service level, and give the human resource department space to become a strategic partner (Ruël, Bondarouk, & Van der Velde, 2007). When we combine the ERP II concept with e-HRM technology, we find a valuable issue to study: research on the implementation of ERP II based on e-HRM. Additionally, other details of this issue can be studied according to different countries, cultures, industries, and so forth.
CONCLUSION
As IT continues to develop, more and more enterprises will adopt ERP systems. ERP can greatly extend the traditional functions of HRM and also heighten the importance of HRM in enterprises. Enterprises that implement ERP must perfect the functions of HRM to raise the success rate and thereby enhance the overall management level of the enterprise. Our hope is that this article will be helpful to both scholars and practitioners who wish to improve the current situation of ERP implementation.
REFERENCES
Ahmad, S., & Schroeder, R. G. (2003). The impact of human resource management practices on operational performance: Recognizing country and industry differences. Journal of Operations Management, 21, 19–43. doi:10.1016/S0272-6963(02)00056-6
Al-Mashari, M. (2002). Enterprise resource planning (ERP) systems: A research agenda. Industrial Management & Data Systems, 102(3), 165–170. doi:10.1108/02635570210421354
Ashbaugh, S., & Rowan, M. (2002). Technology for human resources management: Seven questions and answers. Public Personnel Management, 31(1), 7–20.
Barrett, R., & Mayson, S. (2006). Exploring the intersection of HRM and entrepreneurship: Guest editors' introduction to the special edition on HRM and entrepreneurship. Human Resource Management Review, 16(4), 443–446. doi:10.1016/j.hrmr.2006.08.001
Bellini, E., & Canonico, P. (2007). Knowing communities in project driven organizations: Analysing the strategic impact of socially constructed HRM practices. International Journal of Project Management, (September): 29.
Boudreau, J. W., & Ramstad, P. M. (1997). Measuring intellectual capital: Learning from financial history. Human Resource Management, 36, 343–356.
Bowen, D. E., & Ostroff, C. (2004). Understanding HRM-firm performance linkages: The role of the strength of the HRM system. Academy of Management Review, 29, 203–221.
Classe, A. (2001). Business-collaborative commerce—the emperor's new package. Accountancy, (November).
Davenport, T. H. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, (July-August), 121–131.
Drucker, P. F. (1999). Management challenges for the 21st century (p. 97). New York: HarperCollins.
Edwards, J. (1999). Three-tier client-server at work. New York: John Wiley & Sons.
Enslow, B. (1996). Which comes first: ERP or supply chain planning projects? Gartner Group Best Practices and Case Studies.
Hsu, K., Sylvestre, J., & Sayed, E. N. (2006). Avoiding ERP pitfalls. Journal of Corporate Accounting & Finance, (May-June), 67–74. doi:10.1002/jcaf.20217
Huselid, M. A. (1995). The impact of human resource management practices on turnover, productivity, and corporate financial performance. Academy of Management Journal, 38, 635–672. doi:10.2307/256741
Koch, C. (2002). The ABCs of ERP. CIO Magazine, (February).
Kumar, K., & Hillegersberg, J. V. (2000). ERP experiences and evolution. Communications of the ACM, 43(4), 24–26. doi:10.1145/332051.332063
Lai, W. H. (2006). Implementing e-HRM: The readiness of small and medium sized manufacturing companies in Malaysia. Asia Pacific Business Review, 12(4), 465–485. doi:10.1080/13602380600570874
Li, Y. F. (2001). Thoughts about the ERP human resource management. Journal of Yunnan University of Finance and Economics, 17(10), 12–16.
Markus, M. L., Axline, S., Petrie, D., & Tanis, C. (2000). Learning from adopters' experiences with ERP: Problems encountered and success achieved. Journal of Information Technology, 15, 245–265. doi:10.1080/02683960010008944
Moller, C. (2003). ERP II—next-generation extended enterprise resource planning. Proceedings of the 7th World Multi-Conference on Systemics, Cybernetics and Informatics.
Nilakant, V. (1994). Transdisciplinary approach to a theory of performance in organizations. Human Systems Management, 13(1), 41–48.
Palaniswamy, R. (2002). An innovation-diffusion view of implementation of enterprise resource planning (ERP) systems and development of a research model. Information & Management, 40, 87–114. doi:10.1016/S0378-7206(01)00135-5
Pasmore, W. A., & Mlot, S. (1994). Developing self-managing work teams: An approach to successful integration. Compensation and Benefits Review, 26(4), 15–23. doi:10.1177/088636879402600403
Payne, W. (2002). The time for ERP? Work Study, 51(2), 91–93. doi:10.1108/00438020210418827
Pfeffer, J. (1995). Producing sustainable competitive advantage through the effective management of people. The Academy of Management Executive, 9, 55–69.
Ruël, H. J. M., Bondarouk, T. V., & Van der Velde, M. (2007). The contribution of e-HRM to HRM effectiveness: Results from a quantitative study in a Dutch ministry. Employee Relations, 29(3), 280–291. doi:10.1108/01425450710741757
Schonberger, R. J. (1994). Human resource management lessons from a decade of total quality management and re-engineering. California Management Review, 36(4), 109–123.
Shang, S., & Seddon, P. B. (2000). A comprehensive framework for classifying the benefits of ERP systems. Proceedings of the 6th Americas Conference on Information Systems (pp. 1005-1014).
Siau, K. (2004). Enterprise resource planning (ERP) implementation methodologies. Journal of Database Management, 15(1), 1–4.
Stone, D. L. (2007). The status of theory and research in human resource management: Where have we been and where should we go from here? Human Resource Management Review, 17(2), 93–95. doi:10.1016/j.hrmr.2007.04.005
Sun, X., & Xu, W. (2006). Research on ERP and enterprise's human resources informatization. Science Technology Information Development & Economy, 16(7), 229–230.
Wright, P. M., McMahan, G. C., Snell, S. A., & Gerhart, B. (2001). Comparing line and HR executives' perceptions of HR effectiveness: Services, roles, and contributions. Human Resource Management, 40, 111–123. doi:10.1002/hrm.1002
Wright, S., & Wright, A. (2002). Information system assurance for enterprise resource planning systems: Unique risk considerations. Journal of Information Systems, 16, 99–113. doi:10.2308/jis.2002.16.s-1.99
Yang, J. Y., & Zhao, X. W. (2003). Research on HRM in ERP system. Journal of Beijing Institute of Technology, (August), 73-77.
KEY TERMS AND DEFINITIONS
Electronic Human Resource Management (e-HRM): The planning, implementation, and application of information technology for both networking and supporting at least two individual or collective actors in their shared performing of HR activities.
Enterprise Resource Planning (ERP): An approach to the provision of business support software that enables companies to combine the computer systems of different areas of the business—production, sales, marketing, finance, human resources, and so forth—and run them off a single database. Also defined as an application and deployment strategy to integrate all things enterprise-centric.
Human Resource: The people that staff and operate an organization.
Human Resource Management (HRM): The function within an organization that focuses on the recruitment of, management of, and provision of direction for the people who work in the organization.
Information Technology (IT): The collection of technologies that deal specifically with processing, storing, and communicating information, including all types of computer and communications systems as well as reprographics methodologies.
Manufacturing Resource Planning (MRP II): A method for the effective planning of all resources of a manufacturing company, including functions of business planning, production planning and scheduling, capacity requirement planning, job costing, financial management, forecasting, and so forth.
This work was previously published in Encyclopedia of Information Science and Technology, Second Edition, edited by Mehdi Khosrow-Pour, pp. 1856-1862, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 4.10
A Study of Information Requirement Determination Process of an Executive Information System
Chad Lin, Curtin University of Technology, Australia
Koong Lin, Tainan National University of the Arts, Taiwan
DOI: 10.4018/978-1-59904-843-7.ch092
INTRODUCTION
An executive information system (EIS) provides senior management with easy access to information relevant to their needs. It can spread horizontally across and vertically down to other organizational managers and provide three major types of benefits: information, management support, and organizational support (Salmeron, 2002). According to Salmeron, one key EIS success factor is the fulfillment of users' information needs. However, the user information requirements determination (IRD) process during the implementation of an EIS remains a problematic exercise for most organizations (Walter, Jiang, & Klein, 2003). This is because IRD is the least
understood and least formalized yet most critical phase of the information systems development (ISD) process. This phase is so crucial that many information systems researchers argue that IRD is the single most important stage during an EIS project development process, and if the IRD is inaccurate and incomplete, the resultant system will also be inaccurate and incomplete. Hence, understanding the issues that influence the IRD process of EIS is of critical importance to organizations (Poon & Wagner, 2001). However, little is known about the issues that influence IRD processes during the implementation of an EIS project (Khalil, 2005). Therefore, this article aims to examine key issues surrounding the IRD process during the implementation of an EIS project in a large Australian public-sector organization. The article first reviews relevant literature with
respect to IRD and EIS. Key findings and issues identified from the case study are also presented. The article examines these findings and issues in light of the organization's IRD practices, and concludes by providing some lessons for EIS project implementation.

BACKGROUND

IRD is a critical phase of ISD. IRD is primarily concerned with specific applications such as EIS. IRD has generated a lot of interest and debate among researchers and practitioners as a potential means for improving the success rates of ISD projects such as EIS (Havelka, 2002; Wu & Shen, 2006). The IRD process, which Browne and Ramesh (2002, p. 625) defined as "a set of activities used by a systems analyst when assessing the functionality required in a proposed system," has become increasingly important in obtaining the correct and complete set of user requirements. A number of tools and techniques have been proposed to support the IRD process during the EIS project: prototyping, joint application development (JAD), rapid application development (RAD), data flow diagrams (DFDs), and entity relationship diagrams (ERDs; Duggan & Thachenkary, 2004; Spina & Rolando, 2002). However, despite the existence of all these techniques and tools, the history of ISD has been littered with numerous reports of the complete failure of EIS projects (Khalil, 2005). The common causes of these failures stem largely from difficulties in dealing with the information requirements (Browne & Ramesh, 2002; Davis, 1987). In many cases, budget blowouts and missed deadlines occur. Too often, initial design and programming is followed by a reassessment of needs, redesign, and then more programming (Urquhart, 2001). Many EIS project failures have little to do with technical or programming issues. The source of many of these problems lies with one or a combination of the following major factors: incomplete and/or inaccurate requirement specifications, lack of user involvement, lack of flexibility of computer-based information systems, poor communication, different worldviews of the systems analysts, and other factors (Guinan, Cooprider, & Faraj, 1998; Kirsch & Haney, 2006). Each of these will be discussed briefly in the subsections that follow.

Incomplete and/or Inaccurate Requirements Specifications

This can often lead an organization to address the wrong problem or identify incorrect information needs. Dissatisfaction of the stakeholders with their IS derives from the problem of specifications not being stated accurately and/or completely (Davidson, 2002; Khalil, 2005). This can also arise from users having totally unrealistic expectations of the final EIS. Therefore, incomplete and inaccurate requirements specifications can often result in identifying the wrong information needs or addressing the incorrect IRD problem. This may ultimately lead to EIS project failures. According to Browne and Ramesh (2002), the following challenges should be recognized by both analysts and users when dealing with each other:

• There can never be a complete, correct set of user information requirements.
• Requirements are not stable over time, but are in a constant process of evolution.
• The facilitation skills of systems analysts are crucial to the effective management of the IRD process.
• Systems analysts work in highly political contexts.
Lack of User Involvement

One of the major factors contributing to the failures of EIS projects is the lack of user involvement. By failing to be involved during the system development stages, users might feel frustrated and
disillusioned when they perceive new technologies such as EIS as the threatening creations of outsiders (Robertson & Robertson, 1999). This usually results in resistance and conflicts between the project sponsors, the systems analysts, and the users (Davidson, 2002). Lack of user involvement often results in distrust between the users, the systems analysts, and the project sponsors. Users feel unable to specify what they want because they do not know what is possible, while the systems analysts try to explain what is possible but describe it in ways not understood by the users (Browne & Rogich, 2001; I. Wu & Shen, 2006). This usually not only reduces job satisfaction on both sides but also leads to less-than-adequate systems design (Alvarez, 2002).
Poor Communication

Poor communication between users and analysts is also a major factor contributing to the failure of EIS (Urquhart, 2001). The communication skills of systems analysts have a significant impact on eliciting successful and complete information requirements for an EIS. Some of the most important reasons for communication difficulties are as follows (Douglas, 2003; Guinan et al., 1998; Urquhart, 2001):

• The different perspectives of the different stakeholders involved in a system study
• Uncertainty on the part of the users about the impact the final system will have on their individual roles in the organization
• The observation that the user operates with informal systems and that the formal procedure of the existing systems has been overtaken by less formal, unauthorized procedures
• The problem facing both users and systems analysts that new systems almost certainly include technological innovations

Lack of Flexibility of Computer-Based Information Systems

Computer-based information systems (e.g., EIS) often lack the flexibility to meet changing user information requirements and have little interaction with the manual systems (Salmeron, 2002; I. Wu & Shen, 2006). This is often due to the way computers have to be programmed: any change that involves a change to the program requires a detailed sequence of steps to be taken, which can be time consuming and disruptive. Some changes, even changes that appear trivial to the nonexpert user, cannot be incorporated in the system without a substantial redesign of the computerized parts of the system (Lauesen & Vinter, 2001; Sutcliffe, 2000). Moreover, since organizations and the people within them are dynamic and constantly changing, a computer-based information system that takes too long to finish will not be able to meet users' needs and hence will become a major stumbling block to the success of the EIS.
Worldview of the Systems Analysts

The education and practice of systems analysts can also be a source of problems in IRD processes, since few systems analysts are equipped to deal with the essentially social nature of IS. Systems analysts tend to think that they are the experts who analyze the problem, define it, and provide the solution (Berry, 2002). Many of the problems of ISD projects such as EIS can be attributed to organizational behavioral problems that result from bad designs, and these bad designs in turn stem from the way systems analysts view organizations, their users, and the function of ISD.
Other Factors

There are also some other significant factors that might affect the success of ISD projects. These include an inaccurate assessment of the scope of the problem and broader organizational issues, poor budget control, a delay in the development of applications, difficulty in making changes, hidden backlog, program and software bugs, systems that cost much more to develop and maintain than expected, and development processes that are not dynamic (Alvarez, 2002; Browne & Ramesh, 2002; Havelka, Sutton, & Arnold, 2001).
RESEARCH METHODOLOGY

The objective of this research is to examine key issues of the user-requirement determination process during the EIS project development process. An in-depth case study was carried out in one large Australian public-sector organization involved in the implementation of an EIS project. The organization was responsible for providing an important education service within Australia. It had an annual turnover of A$500 million and about 3,000 employees. In order to meet the necessary educational quality requirements and guidelines set out by the Australian government, the organization had decided to implement an EIS to assist it in making proper decisions. The objectives of the EIS were to (a) support organizational reporting in the areas of program and planning review, annual reporting, and benchmarking and best practices, (b) support the organization in its undertaking of quality-related activities, and (c) identify deficiencies in data sources. Initially, the researchers had attended six sessions of the IRD process between the external systems analyst and the key users. On completion of all these sessions, the researchers refined and modified the interview questions, which were drafted before these sessions. Then, 16 interviews were conducted with nine key participants, and
these included two main sponsors of the EIS project, an external systems analyst, and six key users of the EIS. The interviews focused on the EIS project development process, different stakeholders' views of the EIS, the IRD process, and the evaluation process of the EIS. Each interview lasted between 1 and 2 hours. All interviews were taped, and the transcripts were sent to the interviewees for validation. In cases where there were differences in opinion between participants, either follow-up interviews were conducted or e-mails were sent to clarify their positions. Other data collected included some of the actual project proposals and detailed requirements specifications for the EIS project, planning documents, and some meeting minutes.

More than 300 pages of transcripts were coded and analyzed. The data collection at this organization continued until a point of theoretical saturation, which is when the value of an additional interview was considered to be negligible (Eisenhardt, 1989). Qualitative content analysis was then used to analyze the data gathered (Miles & Huberman, 1994). The analysis of the materials was also conducted in a cyclical manner, and the issues identified were double-checked by the researchers and other experts. The guidelines (i.e., multiple interpretations) set out by Klein and Myers (1999) for conducting and evaluating interpretive field studies in information systems were followed to improve the quality of the research.
RESEARCH FINDINGS

A number of issues emerged from the analysis of the data, and some of the key issues surrounding the IRD process of the EIS project are presented below in some detail. Related information from the observation and document review has been integrated into the discussion to further support the findings.
Theme 1: Problems in Using the IRD Methodology

The interview data suggest that there was general agreement among the users that no ISD/IRD methodology, tool, or problem-solving methodology had been used by the external systems analyst during the IRD process with users for the EIS project. Instead, only an interview was carried out by the external systems analyst to gather the required information from the users. For example, one user said, "It felt very much like questions to which I was responding because it's an interview like that....I didn't feel that I was settling into something that I was participating in. So it's very much like a question and answer." The user had expected some sort of methodology to be used by the systems analyst during the IRD sessions. The researchers' observation supported these claims. Some of the users suggested that the use of a proven methodology and diagrams would be valuable for the IRD process. However, the sponsors and systems analyst claimed that some sort of methodology had been used during the IRD sessions, although this had not been observed by the researchers. For example, the systems analyst said, "I worked loosely to various methodologies, sort of used in the past, in particular, Arthur Andersen's Method One and APT. But they tended to direct more on experience and referencing the documents." Furthermore, the systems analyst went as far as saying that the use of diagrams such as DFDs and ERDs would confuse the users. Most of the users interviewed by the researchers rejected this claim.
Theme 2: Lack of User Involvement

All users indicated that their contributions to the IRD sessions had been hampered by the lack of information. In addition, rather than having several IRD sessions with the systems analyst, most users suggested that a group session would be far more effective as it tended to create synergy among the
users. The users felt that their ability to participate in the IRD process could be enhanced by having such a group session. Instead, the IRD process for this EIS project was, as perceived by the users, merely a question-and-answer exercise. Although the users were given the opportunity to raise any questions and concerns about the existing systems as well as the forthcoming EIS, the problem was that no prior information was given to the users before the IRD sessions. The users felt that they were not given any time and information to prepare for the meetings with the systems analyst. The problem was compounded by the lack of follow-up by the systems analyst. The users did not take part in the rest of the EIS project and were critical of the project sponsors and the systems analyst for not consulting them about the project. The researchers were told privately by one of the project sponsors that the systems analyst was instructed not to involve the users further in other phases of the project. The project sponsors were getting impatient with some of their users regarding their information requirements.
Theme 3: Lack of User Satisfaction

Most users were unhappy with the IRD process of this EIS project and were not impressed by the performance of the project sponsors and, in particular, the systems analyst. For example, one user was very critical of the project sponsors and the systems analyst and said, "I think what they need to do is to give the user an understanding of what they have envisaged the EIS system should be able to do and where it fits....Also, they should articulate in a way that someone who is not a systems person can understand." None of the users were given enough information and time to prepare for the IRD process. For example, one user complained, "If people are going to be involved [in the IRD process], they need to know why..." The problem had been compounded by the instruction by the project sponsors not to
spend too much time listening to the requirements of the users, and also the fact that the scope of the project was unclear.
Theme 4: Lack of Project Scope

Both users and the systems analyst complained about the lack of scope and information for this EIS project. Some of the ideas put forward by the users included the following: (a) a group session should be deployed to elicit users' requirements and needs, (b) more research should be conducted by the systems analyst before the IRD process, and (c) more information about the purpose of the visits by the systems analyst should be given beforehand. As mentioned previously, the reason for not giving proper information to the users before the meetings could be that the project sponsors had instructed the systems analyst to finish the IRD phase as soon as possible. For example, the systems analyst said, "Problems that I had with this particular case is not being so much with gathering of information requirements from users....The problem I had with IRD is perhaps, not being able to maintain a limited scope." The systems analyst was having difficulty in maintaining a limited scope for the EIS project and hence was not able to tell the users exactly what the project was going to be like.
Theme 5: Culture and Politics

Several users pointed out that the culture and politics within the organization had left many employees disillusioned about the whole process, as they felt that they could not make any difference. For example, one user complained that the culture and politics within the organization were the reason users were not consulted about the implementation of new projects such as EIS, which had often led to project failures. For example, he said, "Now I hope we don't end up with yet another project failure. On
past records, chances are we will. And when that happens, everyone will pass the buck. The MIS type of people will say, 'but I've fulfilled what you have told us.'" All users felt that this had been repeated to some extent in this EIS project. A good example of this was the lack of information given to the users by the systems analyst before the IRD session.

The EIS project also appeared to be plagued with politics. Many users interviewed were unhappy with the way that the project sponsor had been given a significant role in this EIS project. On the other hand, the project sponsors also revealed that they were getting impatient with some of the users within the organization. The project sponsors admitted to the researchers that they did not get along with some of the users. To make matters worse, the systems analyst also agreed with the view expressed by some of the users that this EIS project was likely to fail as a result of the prevailing culture and politics within the organization. Both the systems analyst and the users had seen many ISD project failures before, both within and outside the organization.

Overall, most of the key issues identified from this study are largely consistent with the literature. However, the research has further identified that lack of user satisfaction and the organizational culture and politics also have a major impact on the success of the implementation of EIS projects.
FUTURE TRENDS

During the last decade, executive information systems have evolved into what are now called business intelligence (BI) systems (J. Wu, 2000). BI is a significant trend in EIS, as the technology has evolved significantly from internally developed graphical user interfaces to packaged applications that provide users with easy access to data for analysis. BI is defined as the process of monitoring and analyzing business transactions by using business intelligence
to align business operations with the tactical and strategic goals of the organization. In addition, BI encompasses software for extraction, transformation, and loading (ETL); data warehousing; multidimensional or online analytical processing (OLAP); data analysis; and data mining. However, there are still some challenges to overcome before BI can be used and implemented more widely. These include recognizing BI projects as cross-organizational business initiatives, engaging business sponsors, and developing an automated Web intelligence system to extract actionable organizational knowledge by leveraging Web content.
CONCLUSION

This case study illustrates the dynamic relationships between project sponsors, users, and the systems analyst during the IRD process of an EIS project. Most of the users' complaints were centered on the difficulties in giving accurate and complete requirements to the systems analyst during the IRD process. Their difficulties not only stemmed from the inability of the users to specify what they wanted, but were also affected by the attitude of the systems analyst and project sponsors of the EIS project toward the opinions of the users. The results also indicated that there were discrepancies between what the systems analyst said about what he did (espoused theory) and what he actually did (theory in use) during the IRD process. For example, the systems analyst had insisted that some sort of formal methodology was used to elicit user requirements when in fact there was none. Moreover, this research has found that there were significant differences in opinion between the users and the systems analyst. For example, although there was a high degree of agreement about the lack of project scope and the existence of issues in culture and politics, there were significant disagreements about the deployment of the IRD methodology for gathering information requirements for the EIS, the lack of user involvement, and user dissatisfaction. It was also surprising to hear from the systems analyst himself and most users that they were not very optimistic that this EIS project would succeed due to a long history of ISD project failures within the organization. A contribution of this short article is that it has further identified that a lack of user satisfaction and issues regarding the organizational culture and politics have a major impact on the success of the implementation of EIS projects.

REFERENCES

Alvarez, R. (2002). Confessions of an information worker: A critical analysis of information requirements discourse. Information and Organization, 12, 85–107. doi:10.1016/S1471-7727(01)00012-4
Berry, D. M. (2002). The importance of ignorance in requirements engineering: An earlier sighting and a revisitation. Journal of Systems and Software, 60, 83–85. doi:10.1016/S0164-1212(01)00103-0

Browne, G. J., & Ramesh, V. (2002). Improving information requirements determination: A cognitive perspective. Information & Management, 39, 625–645. doi:10.1016/S0378-7206(02)00014-9

Browne, G. J., & Rogich, M. B. (2001). An empirical investigation of user requirements elicitation: Comparing the effectiveness of prompting techniques. Journal of Management Information Systems, 17(4), 223–249.

Davidson, E. J. (2002). Technology frames and framing: A social-cognitive investigation of requirements determination. MIS Quarterly, 26(4), 329–358. doi:10.2307/4132312

Davis, G. B. (1987). Strategies for information requirements determination. In R. D. Galliers (Ed.), Information analysis: Selected readings (chap. 13). Sydney, Australia: Addison-Wesley.
Douglas, H. (2003). A user-oriented model of factors that affect information requirements determination process quality. Information Resources Management Journal, 16(4), 15–32.

Duggan, E. W., & Thachenkary, C. S. (2004). Supporting the JAD facilitator with the nominal group technique. Journal of Organizational and End User Computing, 16(2), 1–19.

Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550. doi:10.2307/258557

Guinan, P. J., Cooprider, J. G., & Faraj, S. (1998). Enabling software development team performance during requirements definition: A behavioral versus technical approach. Information Systems Research, 9(2), 101–125. doi:10.1287/isre.9.2.101

Havelka, D. (2002). Requirements determination: An information systems specialist perspective of process quality. Requirements Engineering, 6(4), 220–236. doi:10.1007/PL00010361

Havelka, D., Sutton, S. G., & Arnold, V. (2001). Information systems quality assurance: The effect of users' experiences on quality factor perceptions. Review of Business Information Systems, 5(2), 49–62.

Khalil, O. E. M. (2005). EIS information: Use and quality determination. Information Resources Management Journal, 18(2), 68–93.

Kirsch, L. J., & Haney, M. H. (2006). Requirements determination for common systems: Turning a global vision into a local reality. The Journal of Strategic Information Systems, 15, 79–104. doi:10.1016/j.jsis.2005.08.002

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23(1), 67–94. doi:10.2307/249410

Lauesen, S., & Vinter, O. (2001). Preventing requirement defects: An experiment in process improvement. Requirements Engineering, 6(1), 37–50. doi:10.1007/PL00010355

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage Publications.

Poon, P. P., & Wagner, C. (2001). Critical success factors revisited: Success and failure cases of information systems for senior executives. Decision Support Systems, 30, 393–418. doi:10.1016/S0167-9236(00)00069-5

Robertson, S., & Robertson, J. (1999). Mastering the requirements process. Harlow, England: ACM Press.

Salmeron, J. L. (2002). EIS evolution in large Spanish business. Information & Management, 40, 41–50. doi:10.1016/S0378-7206(01)00130-6

Spina, M. J., & Rolando, J. A. (2002). JAD on a shoestring budget. CrossTalk: The Journal of Defense Software Engineering, 15(7), 26–28.

Sutcliffe, A. G. (2000). Requirements analysis for socio-technical system design. Information Systems, 25(3), 213–233. doi:10.1016/S0306-4379(00)00016-8

Urquhart, C. (2001). Analysts and clients in organizational context: A conversational perspective. The Journal of Strategic Information Systems, 10, 243–262. doi:10.1016/S0963-8687(01)00046-4

Walter, B. A., Jiang, J. J., & Klein, G. (2003). Strategic information and strategic decision making: The EIS/CEO interface in smaller manufacturing companies. Information & Management, 40, 487–495. doi:10.1016/S0378-7206(02)00063-0

Wu, I., & Shen, Y. (2006). A model for exploring the impact of purchasing strategies on user requirements determination of e-SRM. Information & Management, 43, 411–422. doi:10.1016/j.im.2004.11.004

Wu, J. (2000, February 10). Business intelligence: What is business intelligence? DM Review, 1–2.
KEY TERMS AND DEFINITIONS

Business Intelligence (BI): The process of monitoring and analyzing business transaction processes to ensure that they are optimized to meet the business goals of the organization.

Data Mining: An information extraction activity whose goal is to search large volumes of data for patterns and discover hidden facts contained in databases.

Data Warehouse: A relational database that is designed for query and analysis, and usually contains historical data that are derived from transaction data.

Executive Information System (EIS): A system that provides organizations with a powerful yet simple tool to view and analyze key factors and performance trends in the areas of sales, purchasing, production, finance, and so forth.

Information Requirements Determination (IRD): A set of activities used by a systems analyst when assessing the functionality required in a proposed system.

Joint Application Development (JAD): A process originally developed for designing a computer-based system. It brings together business users and IT professionals in a highly focused workshop.

Rapid Application Development (RAD): A methodology for compressing the analysis, design, build, and test phases into a series of short, iterative development cycles.
This work was previously published in Encyclopedia of Decision Making and Decision Support Technologies, edited by Frederic Adam and Patrick Humphreys, pp. 807-813, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 4.11
Towards Identifying the Most Important Attributes of ERP Implementations

Piotr Soja
Cracow University of Economics, Poland

Dariusz Put
Cracow University of Economics, Poland
ABSTRACT

Enterprise resource planning (ERP) systems have been implemented in various and diverse organizations. The size of companies, their industry, the environment, and the number of implemented modules are examples of their heterogeneity. In consequence, a single procedure which leads to the success of implementation does not appear to exist. Therefore, there have been many implementations that have failed during, and also after, the implementation process. As a result, a considerable amount of research has been trying to identify issues influencing ultimate project success and also to recognize the best implementation projects. The aim of this work is to identify the most important
characteristics of ERP implementation which affect project success. This study builds on data gathered using a questionnaire directed toward people playing leading roles in ERP implementations in a few dozen companies. Twelve attributes were identified and divided into three sets representing effort, effect, and a synthetic measure of success calculated on the basis of the obtained data. Two agglomeration methods were employed to identify exemplar and anti-exemplar groups and objects. These elements were thoroughly analyzed, which led to identifying the most and the least desired attributes of an ERP implementation project. The findings are discussed and related to the results of prior research. Finally, implications for practitioners and concluding remarks summarise the chapter.
DOI: 10.4018/978-1-60566-146-9.ch007
INTRODUCTION

The implementation of an ERP system is a great challenge for a company making the effort of introducing such a system into its organisation. The implementation project is usually connected with sizeable expenses for computer software and hardware, as well as for the implementation services provided by a system solution supplier (e.g., Sarkis & Gunasekaran, 2003). The implementation effects could be very diverse, beginning from the considerable enhancement of enterprise activity and increase of its profitability, to the rejection of the system introduced (e.g., Holland et al., 1999; McNurlin & Sprague, 2002). The companies introducing ERP packages into their organisations differ quite significantly. The implementation endeavours called ERP projects comprise both simple installations of single modules of a system and complex solutions dealing with the installation of many system modules in numerous units of a company (Parr & Shanks, 2000). Therefore, ERP implementation projects form a very diverse population and in order to compare particular implementations, one has to keep this diversity in mind so that such a comparison is reasonable (e.g., Stensrud & Myrtveit, 2003). Thus, it seems appropriate to group purposefully implementation projects into homogenous collections, where the comparison of projects is feasible and sensible. Only in this situation can we talk about a "model" implementation project and examine the project discovered in order to reveal the most needed characteristics. Among the methods of projects grouping suggested by prior studies, there are those employing company size (e.g., Bernroider & Koch, 2001; Buonanno et al., 2005; Everdingen et al., 2000; Loh & Koh, 2004) and those relying on a criterion of the number of user licenses (Sedera et al., 2003). While previous research indicates that company size is an important criterion influencing ERP project conditions, the results regarding the benefits achieved are mixed. Some research
works suggest that benefits gained by large and small-sized organisations seem to be similar (e.g., Shang & Seddon, 2000; Soja, 2005), while other studies advocate that benefits differ by company size (Mabert et al., 2003). Prior studies also suggest other criteria of ERP project grouping that might influence implementation conditions. These criteria include the extent of ERP package modification (Soh & Sia, 2005), and implementation scope and duration time (Soja, 2005, 2006). The results imply that implementation conditions are diverse depending on the project type defined by the dividing criteria. Moreover, the project type can have an impact on the effects achieved by a company as a result of ERP implementation. In particular, the project duration seems to have an important influence on the achieved results (Soja, 2005).

The multitude of potential factors influencing ERP projects is illustrated by the complex division presented by Parr and Shanks (2000). They suggest the following categories for the division of projects: implementation physical scope (single or multiple site), extent of organisational changes, level of system modification, module implementation strategy, and allocated resources in terms of time and budget. Taking into consideration the above-mentioned criteria of division, there are a great many implementation types. Therefore, Parr and Shanks distinguish three main categories of ERP implementations: comprehensive, averagely complicated (middle-road) and simple (vanilla).

Overall, it seems that it is hard to find a generally accepted division of ERP projects into groups which would constitute homogenous collections of similar implementations. Prior studies suggest various criteria of ERP project grouping, and these divisions take into consideration merely the variables defining the efforts made in order to implement a system, but they completely omit the issue of achieved effects. Meanwhile, incorporating the parameters describing implementation results could lead to interesting conclusions.
The goal of this paper is an attempt to discover the most desired attributes of a model ERP implementation project. The article is based on research conducted among a few dozen companies introducing an ERP system into their organisations in Poland. In order to achieve the paper’s goal, the statistical methods of element grouping were employed, which allowed us to extract the groups of homogenous projects that were then ordered on the basis of the measure of achieved success. This procedure allowed us to distinguish the projects having the most desirable characteristics, as well as those with the least desirable attributes.
LITERATURE REVIEW

The idea of discovering issues determining the success of enterprise system (ES) implementation projects has attracted the attention of a considerable number of researchers. There are a great many research approaches which investigate numerous issues and employ various understandings of project success. The works differ in various aspects, i.e. the employed methodologies and chosen variables. Some scholars examine the actual implementation process, others focus on the post-implementation phase, and there are also those who treat ES adoption as a continuous endeavour without a clearly defined end point. The investigated issues vary from technological, through organisational, to those connected with people, and from operational to strategic considerations. The following section summarises major findings achieved by prior research connected with the attributes and characteristics of successful ES adoption projects. Among many issues analysed by researchers, knowledge seems to play a paramount role in enterprise system adoption. In particular, the researchers emphasize the need of adequate knowledge transfer from external consultants to clients in an adopting organisation (Ko et al., 2005). This process should be dealt with by sev-
eral parties participating in ES adoption: vendors, consultants and IS/IT managers. They should pay attention to improving not only the quality of ERP products, but also users' knowledge and involvement. They should also be aware of the importance of choosing suitable consultants and suppliers (Wu & Wang, 2007). Furthermore, emphasizing the role of knowledge transfer, McGinnis and Huang (2007) introduced the idea of constituting a knowledge-sharing community which may play the crucial role of a platform that can be used to provide a common frame of reference to all ERP activities. The authors also highlight the concept that the actual implementation does not finish the whole adoption endeavour, which should still be monitored and handled after the system rollout.

The quality of the actual implementation process influences the course of the post-implementation phase. In particular, Nicolaou (2004) suggests five critical dimensions which affect the whole adoption project, including the post-implementation stage. These issues relate to the review of overall project scope and planning, the review of driving principles for project development, the effectiveness of misfit resolution strategies, the evaluation of attained benefits, and the evaluation of learning.

The enterprise system adoption projects differ greatly as regards the scope of the implementation. In general, the most complicated full-scope and highly-integrated enterprise system adoption projects are perceived as more likely to bring the best benefits for the company. This issue is illustrated by Ranganathan and Brown (2006), who conclude that a company's announcement of ERP adoption positively influences stock market returns. The authors suggest that the greater the project's scope (in terms of number of locations and the extent of introduced functionality), the stronger the influence.

Researchers highlight the need for identifying adequate characteristics of the enterprise system in order to achieve its successful adoption. The enumerated features which appear to be the most important encompass system quality (Kositanurit
et al., 2006) together with other quality dimensions related to information and service (Chien & Tsaur, 2007), and perceived usefulness of the system (Amoako-Gyampah, 2007). In particular, on the basis of research conducted among 571 respondents and employing the Technology Acceptance Model, Amoako-Gyampah (2007) suggests that users' intention to use an ERP might depend more on how useful they perceive the system to be than on their level of involvement or how easy the system is to use. Furthermore, the author advocates that perceived usefulness influences the users' involvement and beliefs concerning the necessity of changes. However, on the other hand, Amoako-Gyampah points out the role of legacy systems and related people's habits and expertise, which may negatively influence their attitudes towards ERP.

Identifying the desired features of the enterprise system is a necessary condition for successful adoption. However, equally important is the appropriate implementation process, during which one of the most crucial issues is to achieve an alignment between the system's capabilities and the company's needs. Sharma and Vyas (2007) emphasize this issue by talking about the synergy between technology and management, which is advocated as an important element influencing the success of ERP adoption.

Enterprise system adoption is inevitably connected with some extent of organisational changes which are carried out in the adopting company. The more radical approach to organisational changes is called business process reengineering (BPR), while the less radical method bears the name of business process improvement (BPI) (Law & Ngai, 2007). Schniederjans and Kim (2003), on the basis of a survey conducted among 115 US electronic manufacturing firms, advocate that business process change should precede the enterprise system implementation. Furthermore, Law and Ngai (2007), drawing on the experience of 96 companies, conclude that during enterprise system adoption firms should undertake business
process improvement and ought to have a dual focus: operational and strategic. The authors demonstrate that firms which had only an operational focus achieved lower performance than those having a dual or strategic focus. In short, the discussion above illustrates that enterprise system adoption should be first and foremost an organisational and business project. It should be treated as a business-led endeavour, in contrast to an IT-related initiative (e.g., Law & Ngai, 2007; Nicolaou, 2004; Tinham, 2006).

The ES adoption project needs constant monitoring as regards employed resources, both internal and external. As far as internal resources are concerned, Peslak (2006), on the basis of the opinions of over 200 top financial executives, advocates that cost and time were found to be major determinants of project success and should be carefully measured and monitored. Further, the author also points out that the use of external consultants for implementation should be carefully controlled, since it was found to adversely impact cost performance.

The enterprise system adoption projects form a very diverse group differing in several aspects. This issue is discussed by Stensrud and Myrtveit (2003), who examined 30 ERP projects using Data Envelopment Analysis Variable Returns to Scale (DEA VRS) to measure the productivity of software projects, and employed the method to identify outstanding ERP projects that may serve as role models. As a result, they suggest that the average efficiency among the investigated projects is approximately 50%. Furthermore, the authors notice that there were significant differences in productivity between projects in various industries. Therefore, one should exhibit caution when benchmarking and comparing projects across industries. The authors suggest that performance assessments should include both productivity and quality indicators, and should also take into account other external factors such as schedule constraints.
As regards the difficulties with the comparison of different ES adoption projects, the impact of organisational size on ES project conditions was studied by several researchers. The majority of authors defined organisational size taking into consideration the number of employees; other studies understood the size of a company in terms of the level of revenues (Mabert et al., 2003) or defined the size of an ES project as the number of installed licences of the system (Sedera et al., 2003). While investigating ES projects, the scholars employed research approaches based on case studies, interviews and surveys. Their respondents were mainly adopters; however, some studies also enquired of system supplier representatives (e.g., Mabert et al., 2003). The results of prior works illustrate that in the case of small firms, the most important issues comprise available human resources and system fit with the company's organisation (Bernroider & Koch, 2001; Buonanno et al., 2005; Mabert et al., 2003; Muscatello et al., 2003; Raymond & Uwizeyemungu, 2007), which result in a shorter implementation time, lower costs, and a lack of need for significant organisational changes (Adam & O'Doherty, 2000; Bernroider & Koch, 2001; Mabert et al., 2003). The paramount significance of human resources in the case of small and medium-size companies is expressed by Sun et al. (2005), who claim that a great emphasis during ERP implementation should be placed on people. It is worth noting that the issues connected with organisational changes are not perceived unambiguously by researchers: some point out the lack of willingness to perform organisational changes in the case of small firms, while others claim that small companies are more likely to change their processes to fit the system (Mabert et al., 2003). The cited research works also suggest that benefits realized by small and large companies differ: larger firms achieve greater benefits, with a special emphasis on the improvement of financial indicators, while smaller companies accomplish first and foremost the improvement of manufacturing
and logistics activities. The idea of limited benefits achieved by small and medium-size companies is reflected in the research of Sun et al. (2005), who conclude that as the implementation schedule increases, the cost increases accordingly, while the achievement increases to some point, beyond which there is no significant achievement benefit.
RESEARCH METHODOLOGY

This study is based on exploratory research conducted among enterprises introducing an ERP package into their organisations. A field study was adopted as a general research approach, and a questionnaire was employed as a data-gathering technique (e.g., Boudreau et al., 2001). The research question posed in this study could be expressed as follows: What are the most desired attributes of an ERP implementation project? The research questionnaire was comprised of questions with a mixture of scale, multiple choice and open questions. The purpose of these queries was to provide demographic data and details necessary to assess project conditions and implementation effects. The list of respondent enterprises was prepared on the basis of reports analysing the ERP market in Poland and databases containing companies' address data. The resulting list contains firms that introduced an ERP package into their organisations with a broad scope, estimated on the basis of available data. The questionnaire was directed toward the people playing leading roles (the project leader, if possible) in the implementation. With the help of the questionnaire, data has been gathered regarding the conditions of implementation projects, efforts incurred, as well as results achieved. The collected data contain various pieces of information regarding both the implementation process and achieved results. Part of the data contains objective items, while the other (subjective) items include respondents' individual evaluations. The achieved collection of projects is varied; hence, an attempt to identify the
model object, having the most desired attributes which led to the completion of implementation goals, is not an easy task. The group of objects was characterised by 12 attributes, which were divided into 3 distinct subsets. In the first subset, there were input indicators of an implementation process—let us call them “effort indicators”. The second group was comprised of variables relating to the implementation results—called “effect indicators” (see Table 1). The third, one-element subset, contained the calculated variable being a synthetic measure of implementation success, which was calculated on the basis of data gathered from the enterprises (Soja, 2004). In the next stage of the research, this measure was used to establish the hierarchy of the groups. The success synthetic measure, based on the understanding of success in the information systems domain (e.g., Lyytinen, 1988), employs 5 partial measures: (1) the actual scope of an implementation with respect to the planned implementation, (2) the actual duration with respect to the assumed duration, (3) financial budget with regard to the planned budget, (4) users’ level of satisfaction from the system introduced, and (5) the existence and achievement of project goals (Soja, 2006).
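To make the construction of the synthetic success measure concrete, the sketch below shows one plausible way of aggregating the five partial measures into a single score. The equal-weight mean, the [0, 1] scaling of each partial measure, and all function and variable names are assumptions made for this illustration; they are not the exact formula published in Soja (2004).

```python
def synthetic_success(scope_ratio, duration_ratio, budget_ratio,
                      user_satisfaction, goal_achievement):
    """Aggregate the five partial success measures into one score.

    scope_ratio: actual / planned implementation scope
    duration_ratio: actual / planned duration (>1 means overrun)
    budget_ratio: actual / planned budget (>1 means overrun)
    user_satisfaction, goal_achievement: ratings on a 1-5 scale
    """
    partials = [
        min(scope_ratio, 1.0),           # (1) scope actually delivered
        min(1.0 / duration_ratio, 1.0),  # (2) schedule overruns penalised
        min(1.0 / budget_ratio, 1.0),    # (3) budget overruns penalised
        user_satisfaction / 5.0,         # (4) user satisfaction
        goal_achievement / 5.0,          # (5) achievement of project goals
    ]
    # Equal weights are an assumption; the study may have weighted
    # the partial measures differently.
    return sum(partials) / len(partials)

# Full scope, 10% schedule and budget overruns, satisfaction and goal
# achievement of 4 out of 5 yield a score of about 0.88.
print(round(synthetic_success(1.0, 1.1, 1.1, 4, 4), 2))
```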
Table 1. Implementation effort and effect variables

Effort variables:
• Company size measured by number of employees (Size)
• Planned duration time of an implementation (PD)
• Zero-one variable bearing information whether the MRP Explosion module was implemented (MRP)
• Number of implemented modules except MRP Explosion (Mod)

Effect variables:
• Actual duration time of an implementation (AD)
• Measure of financial budget spending with regard to the planned budget (Bud)
• Implemented scope (Scope)
• User satisfaction indicator (US)
• Level of achievement of project goals (Goal)
• Subjective measure of positive effects of an implementation (PE)
• Subjective measure of negative effects of an implementation (NE)
In the research conducted, two agglomeration methods were employed: the hierarchical Ward's method (e.g., El-Hamdouchi & Willett, 1986), which is the most commonly used agglomerative method for forming clusters (Everitt, 1993), and the non-hierarchical k-Means method. The aim of Ward's method is to join objects together into ever increasing sizes of clusters using a measure of similarity or distance. K-Means, on the other hand, is a simple non-parametric clustering method that minimises the within-cluster variability and maximises the between-cluster variability. The k-Means method requires that the number of clusters is specified beforehand (National Statistics 2001 area classification, 2001). One possibility for obtaining this number is to run Ward's method and use the outcome as the initial configuration for k-Means.

Since in the k-Means method a researcher has to arbitrarily provide the number of clusters, a two-phased approach is common in cluster analysis research. In the first stage, a hierarchical method (e.g., Ward's method) is applied in order to determine a preliminary number of clusters, and in the second step the actual classification of objects using the k-Means method takes place (e.g., Everitt et al., 2001). This approach was adopted during this research and is illustrated in Figure 1.

Figure 1. Research model

The procedure is aimed at the separation of groups of objects which are similar to each other but differ to a greater extent from the objects belonging to the remaining groups (e.g., Kaufman & Rousseeuw, 1990). Firstly, the standardization of variables was carried out, which allowed us to remove the excessive influence of variables having a wide range of values on the outcome of the research. In the next step, Ward's hierarchical grouping method was applied. On the basis of the distance diagram obtained, two decisions were made (a sketch of the resulting two-phased procedure follows this list):

1. Some objects were excluded from further processing. The objects most dissimilar to other items, or those forming small, two-element groups, were treated as accidental measurements. Their exclusion allowed us, at the next stage, to receive more homogenous clusters, containing objects the most similar to each other and lying closer to the hypothetical centre of a cluster.

2. The k value was selected for the applied k-Means method. The greater the k, the more the clusters; these clusters tend to be smaller and contain more similar objects. A small k means, on the other hand, fewer groups and more diverse objects within each subset. In order to determine the k value, the distance diagram achieved with the use of Ward's method was employed. After excluding objects dissimilar to other items, a visual analysis of the number of clusters in the diagram was performed.
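A minimal sketch of this two-phased procedure in Python is shown below. The toy data, the chosen k, and the library choices (scipy and scikit-learn) are illustrative assumptions, not the tooling actually used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy data: one row per project, one column per indicator (here the
# four effort variables of Table 1: Size, PD, MRP, Mod).
X = np.array([
    [250, 12, 1, 4],
    [1200, 24, 1, 5],
    [80, 6, 0, 3],
    [450, 18, 0, 4],
    [900, 30, 1, 5],
    [150, 9, 0, 2],
], dtype=float)

# Step 1: standardize, so that wide-ranged variables such as company
# size do not dominate the distance calculations.
Z = StandardScaler().fit_transform(X)

# Step 2: Ward's hierarchical method; in the study the resulting
# distance diagram (dendrogram) was inspected visually to exclude
# dissimilar objects and to choose k.
ward_linkage = linkage(Z, method="ward")

# Step 3: k-Means with the k read off the Ward diagram (k=2 here,
# purely for the toy data; the study chose k=7 or k=8).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z)
print(km.labels_)  # cluster membership of each project
```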
During the next stage, the k-Means method was applied. The calculations were performed three times: (1) for effort indicators only, (2) for effect indicators only, and (3) for all eleven indicators of effort and effect together. As a result, the separated groups of similar objects were extracted, together with the distance of each object from the hypothetical centre of a cluster. For each
cluster, the average value of the success measure was calculated using the synthetic success measures evaluated for the objects belonging to a particular group, and on the basis of this value a hierarchy of the groups was determined. The cluster having the greatest average value of the success measure was recognised as containing objects with the most desired characteristics. Simultaneously, the cluster with the smallest average value of the success measure was recognised as having objects with the least desirable attributes. Within each of these two extreme groups, one object having the smallest distance from the hypothetical centre of the cluster was distinguished. The object coming from "the best" group was regarded as the exemplar (a model implementation), while the object extracted from the group having the smallest average value of the success measure was perceived as the anti-exemplar (an anti-model implementation). Since the calculation was performed three times, three exemplars and three anti-exemplars were extracted, characterising the most needed and, also, the least desirable attributes of the variables describing efforts, effects, as well as efforts and effects jointly (in some cases two objects were distinguished, since both were equidistant from the centre).
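Continuing the sketch above, ranking the clusters by average success and selecting the exemplar and anti-exemplar objects could look as follows. The helper name and the reuse of Z, the labels, and the centres from the previous sketch are assumptions for illustration only.

```python
import numpy as np

def rank_and_pick(Z, labels, centres, success):
    """Order clusters by mean synthetic success; return the exemplar
    (object closest to the centre of the best cluster) and the
    anti-exemplar (object closest to the centre of the worst one)."""
    clusters = sorted(set(labels),
                      key=lambda c: success[labels == c].mean(),
                      reverse=True)

    def closest_to_centre(c):
        members = np.where(labels == c)[0]
        dists = np.linalg.norm(Z[members] - centres[c], axis=1)
        return members[np.argmin(dists)]

    exemplar = closest_to_centre(clusters[0])        # "model" project
    anti_exemplar = closest_to_centre(clusters[-1])  # "anti-model" project
    return clusters, exemplar, anti_exemplar

# Example with the objects clustered earlier (success scores assumed):
# success = np.array([0.88, 0.95, 0.40, 0.75, 0.91, 0.35])
# rank_and_pick(Z, km.labels_, km.cluster_centers_, success)
```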
The detailed analysis of the attributes of the exemplar and anti-exemplar objects, as well as the basic statistics calculated for the clusters distinguished as the best and the worst, allowed us to draw conclusions as regards the most needed and the least desired parameters of an implementation project. Thus, the research question can be answered, i.e. the most desired attributes of an ERP project can be elicited. These conclusions could be a suggestion for people responsible for running ERP implementation projects, so that they pay attention to certain facts which contribute to project success or failure.
rEsEArcH DAtA During the research, 223 enterprises were contacted and 68 (30%) answers were obtained from enterprises representing the whole country and various industries. From among the questionnaires received, 64 were accepted for further analysis. All enterprises investigated in this study represent companies which introduced an ERP system into their organisations. The companies classified by industry type are described as shown in Table 2, where the number of companies belonging to particular industries was provided. As can be easily seen, the vast majority of companies comprise of manufacturing enterprises (75%). For the purpose of analysis, this study adopted the criterion defining enterprise size as the number of employees. The understanding of “small” and “large” companies is derived from the European Community’s definition for small and medium-sized companies (e.g., The Commission of the European Community, 1996). The investigated enterprises differ significantly in their size regarding the number of employees, which can be seen in Table 3. It contains, in subsequent rows, the number of companies (column n) employing a number of workers which falls within a specified range. The largest group is formed by companies employing from 501 to
1046
Table 2. Companies by industry Branch / Industry
n
%
Machinery Manufacturing
12
19%
Food Manufacturing
12
19%
Chemical Products Manufacturing
11
17%
Metal Products Manufacturing
8
13%
Trade
6
9%
Electrical Equipment Manufacturing
5
8%
Power Industry
5
8%
Construction
2
3%
Finance
2
3%
Other
1
2%
1000 workers, and constitutes more than one fifth of the companies researched. The second largest group is made up by the biggest companies employing over 1000 workers, which represents 20% of enterprises evaluated. Certainly, the least numerous group is formed by small companies, employing not more than 100 workers. The implementation projects researched make up quite a diverse group when project duration time is taken into consideration. Among the companies examined, there are projects lasting not more than a couple of months, as well as implementations with a duration time longer than 3 years. Table 4 illustrates the number of projects as regards planned and actual duration time. The examined projects are also diverse as regards to the implementation scope defined by Table 3. Companies by number of workers Number of workers
n
%
20 to 50
3
5%
51 to 100
3
5%
101 to 200
10
16%
201 to 300
10
16%
301 to 500
11
17%
501 to 1000
14
21%
over 1000
13
20%
Towards Identifying the Most Important Attributes of ERP Implementations
Table 5. Projects by implemented modules
Table 4. Projects by duration time Duration time
Number of companies by project duration planned
actual
up to 6 months
11
9
6 to 12 months
19
18
1 to 1,5 year
18
14
1,5 to 2 years
4
9
2 to 3 years
9
7
3 and more years
3
7
the number of installed modules of an ERP system. The following modules were taken into consideration: Finance, Purchasing, Inventory, Sales, Shop Floor Control and MRP Explosion. The last module is treated with special attention, because its implementation is exceptionally difficult and usually requires previous implementation and established use of several key modules of a system. Table 5 contains the number of companies implementing subsequent modules of a system, and Table 6 includes numbers of companies by the total number of modules introduced, with the exception of the module MRP Explosion. Finally, it seems interesting to present the range of implemented ERP packages, which is visible in Table 7. It contains ERP system names and the number of companies adopting a particular package (column #). As can be seen, the projects researched form a very varied collection, introducing 26 various packages. The world’s leader, SAP R/3, is clearly the most popular solution and was introduced in 25% of the projects researched. Then came IFS Applications, then MicroMRP, and MFG/Pro. It is interesting to note that their usage is three times lower than SAP’s. Also, it is interesting that the vast majority of researched companies introduced localised foreign solutions– only 6 firms implemented packages developed and known in Poland (9%).
Table 5. Projects by implemented modules

Module               n     %
Finance              61   95%
Inventory            59   92%
Sales                55   86%
Purchasing           54   84%
Shop Floor Control   37   58%
MRP Explosion        29   45%
Table 6. Projects by number of implemented modules (without MRP Explosion)

Number of modules     n     %
1                     4    6%
2                     1    2%
3                     6    9%
4                    23   36%
5                    30   47%
Table 7. Number of ERP packages implemented

#    ERP Package
17   R/3
7    IFS Apps
6    MMRP (MicroMRP)
5    MFG/Pro
3    Baan IV, Exact, Scala
2    Adaptix*, Digitland Enterprise*, MAX (ICL), Movex, Tetra cs/3
1    ASW, Concorde, Fourth Shift, JDEdwards, Komadres*, Manager II*, Mapics, One World, Oracle Apps, Platinum ERA, Prodis, Promis S/4, System 21, Triton

* package developed in Poland
RESULTS

Figures 2–4 contain distance diagrams obtained by applying Ward's method, using, consecutively, the effort variables, the effect variables, and all variables. On the basis of the diagram analysis, selected observations were excluded from
further processing; these differed in each case. The longer the vertical line on a diagram (see Figures 2, 3, and 4), the less similar the observation is to the others. Observations represented by the longest lines, or constituting small two-element groups, were excluded. Table 8 contains the list of excluded objects together with the ultimate cardinality of the object sets used in further research. The table also includes the k value determined on the basis of the analysis of the object clusters classified for further processing.
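To make this screening step concrete, the following Python sketch shows how a Ward's-method distance diagram can be produced and how long-merge observations might be flagged. The data matrix, the distance threshold, and the two-element-group rule are illustrative assumptions, not the authors' exact procedure.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# Placeholder data: 64 projects described by 4 standardised effort variables.
rng = np.random.default_rng(0)
X = rng.random((64, 4))

Z = linkage(X, method="ward")          # Ward's agglomerative clustering

# Observations merged only at large distances (long vertical lines in the
# diagram) or sitting in tiny two-element groups are exclusion candidates.
labels = fcluster(Z, t=0.6 * Z[:, 2].max(), criterion="distance")
sizes = np.bincount(labels)
excluded = [i for i in range(len(X)) if sizes[labels[i]] <= 2]
print("candidate outliers:", excluded)

dendrogram(Z)                          # the distance diagram itself
plt.show()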
The outcome of dividing the object collection with the k-means method is presented in Table 9. The clusters are ordered from the largest to the smallest average value of the success measure, which means that a group with a smaller identifier contains objects with more desirable properties from the implementation efficiency point of view. Along with each object identifier, the distance from the hypothetical centre of the corresponding cluster is given. The table also contains the average success measure determined for each cluster.
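A sketch of this grouping and ordering step is given below. The data and the success vector are random placeholders, and k = 7 follows Table 8; only the ordering of clusters by mean success measure is taken from the text.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.random((56, 4))                # 56 retained objects x 4 effort variables
success = rng.random(56)               # average success measure per object

km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)

# Order clusters by descending average success measure, as in Table 9,
# and report each member's distance from its cluster centre.
means = [success[km.labels_ == c].mean() for c in range(7)]
for rank, c in enumerate(np.argsort(means)[::-1], start=1):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    print(f"group {rank}: avg success {means[c]:.4f}",
          [f"A{m + 1:02d}({d:.3f})" for m, d in zip(members, dists)])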
Figure 2. Distance diagram obtained with the use of Ward’s method applied for 4 effort variables
Figure 3. Distance diagram obtained with the use of Ward’s method applied for 7 effect variables
Figure 4. Distance diagram obtained with the use of Ward’s method applied for all 11 variables
Table 8. List of objects excluded from further processing on the basis of Ward's distance diagram analysis (see Figures 2, 3, and 4)

Measurement for 4 effort variables (56 objects selected for further analysis, chosen k = 7):
excluded A37, A63, A25, A49, A03, A18, A21, A02

Measurement for 7 effect variables (48 objects selected for further analysis, chosen k = 8):
excluded A53, A41, A58, A18, A54, A63, A05, A28, A31, A37, A55, A02, A26, A67, A48, A07

Measurement for all 11 variables (46 objects selected for further analysis, chosen k = 8):
excluded A53, A63, A22, A05, A58, A18, A34, A39, A51, A35, A37, A29, A41, A03, A20, A28, A54, A14
Exemplar and Anti-Exemplar Objects’ characteristics On the basis of the data obtained by employing k-Means method, the exemplar and anti-exemplar objects were chosen. The achieved results were put together in Table 10. The data in Table 10 allow us to draw certain conclusions regarding the characteristics of exemplar and anti-exemplar objects. In all three cases, exemplars are characterised by quite a long implementation time. The whole undertaking is
well planned: the actual implementation time is similar to the planned one (although in all cases slightly exceeded), and the budget is only insignificantly exceeded. Correspondingly, the planned scope of the implementation was 100 percent completed. Predictably, the level of goal achievement is very high (from 3 to 5, where 5 is the maximum value), and user satisfaction is estimated at the level of 4 (maximum 5) in all cases. Furthermore, the clear advantage of subjective positive effects indicated by respondents over negative effects demonstrates user satisfaction and, indirectly, project success.
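The distances reported in Table 10 suggest a simple selection rule, sketched below under the assumption (consistent with those distances, though not stated explicitly in the text) that the exemplar is the object nearest the centre of the best cluster and the anti-exemplar the object nearest the centre of the worst cluster.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.random((56, 4))                # placeholder objects
success = rng.random(56)               # placeholder success measures

km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)
means = np.array([success[km.labels_ == c].mean() for c in range(7)])
best, worst = int(np.argmax(means)), int(np.argmin(means))

def nearest_to_centre(cluster):
    # The member with the smallest distance from the cluster centre.
    members = np.where(km.labels_ == cluster)[0]
    d = np.linalg.norm(X[members] - km.cluster_centers_[cluster], axis=1)
    return members[int(np.argmin(d))]

print("exemplar:", nearest_to_centre(best))
print("anti-exemplar:", nearest_to_centre(worst))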
Table 9. Hierarchy of clusters obtained with the use of the k-means method, determined on the basis of average success measure (distance from cluster centre in parentheses)

For 4 effort variables:
Group 1 (avg. success .8339): A27(.328), A31(.068), A42(.395), A48(.542), A52(.277), A57(.277), A60(.082)
Group 2 (avg. success .8013): A22(.000), A35(.117), A39(.000), A51(.117)
Group 3 (avg. success .7553): A08(.216), A11(.247), A13(.104), A15(.158), A16(.250), A20(.126), A23(.065), A43(.096), A45(.219), A53(.142), A56(.038), A59(.294), A65(.038), A66(.038), A68(.158)
Group 4 (avg. success .7536): A01(.188), A04(.188), A30(.154), A46(.383), A50(.195), A55(.394), A58(.238)
Group 5 (avg. success .7402): A05(.282), A07(.101), A09(.142), A10(.225), A14(.303), A19(.142), A26(.142), A29(.318), A44(.033), A64(.396)
Group 6 (avg. success .6525): A06(.094), A17(.059), A40(.071), A41(.093), A47(.108)
Group 7 (avg. success .6363): A28(.140), A33(.244), A34(.257), A36(.235), A38(.292), A54(.133), A62(.177), A67(.140)

For 7 effect variables:
Group 1 (avg. success .8528): A01(.209), A03(.134), A35(.166), A43(.172), A44(.122), A56(.200), A57(.217)
Group 2 (avg. success .7979): A08(.255), A14(.127), A33(.079), A38(.281), A52(.133), A62(.166)
Group 3 (avg. success .7950): A21(.215), A25(.131), A27(.156), A46(.248), A60(.038)
Group 4 (avg. success .7853): A10(.294), A11(.230), A19(.265), A20(.285), A40(.224), A47(.254), A50(.167), A51(.142)
Group 5 (avg. success .7681): A13(.161), A16(.255), A17(.086), A36(.174), A65(.168), A66(.123)
Group 6 (avg. success .7671): A09(.102), A23(.253), A30(.215), A39(.160), A42(.156), A49(.177), A59(.196), A68(.167)
Group 7 (avg. success .6499): A04(.396), A06(.222), A22(.331), A29(.249), A34(.310), A64(.286)
Group 8 (avg. success .6327): A15(.122), A45(.122)

For all 11 variables:
Group 1 (avg. success .8339): A27(.232), A31(.251), A42(.347), A48(.432), A52(.298), A57(.234), A60(.179)
Group 2 (avg. success .8059): A01(.191), A10(.261), A30(.166), A44(.232), A50(.233)
Group 3 (avg. success .7983): A08(.303), A13(.220), A16(.252), A23(.301), A43(.282), A49(.285), A56(.162), A65(.203), A66(.153), A68(.201)
Group 4 (avg. success .7812): A07(.243), A09(.187), A17(.219), A26(.206)
Group 5 (avg. success .7679): A33(.192), A38(.285), A46(.279), A59(.288), A62(.176)
Group 6 (avg. success .6902): A02(.272), A04(.432), A21(.286), A25(.310), A55(.228)
Group 7 (avg. success .6746): A06(.265), A11(.263), A15(.284), A19(.279), A40(.274), A45(.449), A47(.304), A64(.463)
Group 8 (avg. success .6023): A36(.112), A67(.112)

Note: Exemplar and anti-exemplar objects are bold.
The objects distinguished as anti-exemplars are characterised mainly by a short planned implementation time, together with a relatively large number of implemented modules. In most cases, the companies did not implement the MRP module; therefore, they cannot be treated as the most extensive projects introducing a system in its full functionality. The budget was exceeded, which could suggest that the project was not properly planned. In these companies the implementation scope was not entirely realised; therefore, the level of goal completion is
estimated to be lower than in the case of positive implementations (below 3, except for 2 cases). The users perceive a rather small number of positive and negative effects from the implementation; interestingly, their numbers are similar, and in 3 cases there were more positive effects than negative ones.
Table 10. Exemplar and anti-exemplar objects of the implementation process

Effort variables
                ID    Succ  Size   Industry        System      Mod  MRP  PD  AD  Bud  Scope  US  Goal  PE  NE
Exemplar        A31   .877  >1000  Transport       R/3         4    0    24  30  110   98    4   5     4   0
Anti-exemplar   A54   .363  >500   Food            Concorde    3    0     5   5  200   25    3   2     1   0

Effect variables
Exemplar        A44   .891  >300   Food            R/3         5    1    12  13  110   98    4   5     4   1
Anti-exemplar   A15   .676  >300   Food            R/3         4    0     6   6  110   90    2   2     2   2
Anti-exemplar   A45   .589  >200   Food            Exact       4    0     1   1  100   80    1   1     1   2
Anti-exemplar   A06*  .574  >100   Electrical Eq.  MMRP        5    1     8  18  150   90    3   3     2   3

All variables
Exemplar        A60   .822  >1000  Power           IFS         3    0    24  30  100  100    4   3     4   1
Anti-exemplar   A36   .635  >500   Chemical        Exact       4    0     3   5  130   80    4   2     3   0
Anti-exemplar   A67   .569  >500   Chemical        Scala       4    0     6   6  130   80    3   0     3   0
Anti-exemplar   A11*  .689  >100   Metal           Manager II  5    0    12  12  130   80    3   3     2   2

Symbols: ID – company identifier, Succ – average success measure; other columns as in Table 1.
* – chosen from the next-to-last group, because anti-exemplars A15 and A45, as well as A36 and A67, belonged to two-element groups
It is worth noting that the R/3 package from SAP was implemented in the exemplar projects obtained on the basis of both the effort variables and the effect variables. On the other hand, the exemplar project distinguished on the basis of all variables introduced the IFS system. Interestingly, in the case of the effect variables, both the exemplar and an anti-exemplar project implemented the R/3 package, and both came from the food industry. Practically all model objects introduced foreign packages; only one anti-exemplar object (for all variables) implemented Polish software (Manager II). The results show that there is no single ERP package connected with exceptional implementation performance.
Exemplar and Anti-Exemplar clusters’ characteristics In order to verify the observations achieved on the basis of exemplar and anti-exemplar object analysis, for all extreme clusters (containing the best and the worst objects), the basic statistics were estimated (average, minimum and maximum) on the basis of attribute values of objects belonging to particular clusters. The results are presented in
Table 11, which depicts the clusters containing objects with the most desired characteristics, and in Table 12, which contains data regarding the clusters with the worst objects. The first table comprises two parts (instead of three) because the analyses performed with the effort variables and with all variables yielded the same exemplar groups containing the same objects. The analysis of the data put together in Tables 11 and 12 leads to certain general conclusions regarding the implementation process in these pattern companies. The objects included in the exemplar clusters represent various industries, while the anti-exemplar groups mainly comprise companies from the chemical (6 companies out of 12) and food (4 companies) industries. The planned duration time is longer in the exemplar clusters than in the anti-exemplar groups, while the number of implemented modules is similar or even bigger in the anti-exemplar groups. This means that, in the case of anti-exemplar projects, the implementation duration time was estimated too optimistically. The implementation scope is near 100 percent among exemplars
Table 11. Average, minimum and maximum values of variables estimated for clusters containing objects with the best parameters

Effort variables and all variables, number of objects = 7
Industry: 4 power, 1 machinery, 1 food, 1 transport
System: 3 IFS, 3 R/3, 1 One World
MRP*: 0%

      Succ   Size    Mod  PD    AD    Bud  Scope  US    Goal  PE   NE
Avg   .834   >1000   3.6  22.9  24.4  106   97    3.57  3.9   3.3  0.7
Min   .776   >1000   2    14    14    100   90    3     3     2    0
Max   .877   >1000   5    36    36    110  100    4     5     4    1

Effect variables, number of objects = 7
Industry: 2 food, 1 chemical, 1 machinery, 1 metal, 1 power
System: 2 MMRP, 2 R/3, 1 Exact, 1 IFS
MRP*: 57%

      Succ   Size    Mod  PD   AD   Bud  Scope  US    Goal  PE  NE
Avg   .853   >500    3.9  9.6  9.6  103   93    3.57  4.4   4   1
Min   .775   >50     1    3    4    100   80    3     4     3   1
Max   .909   >1000   5    18   18   110  100    4     5     5   1

* % of companies in the group implementing the MRP module
Table 12. Average, minimum and maximum values of variables estimated for clusters containing objects with the worst parameters

Effort variables, number of objects = 8
Industry: 4 chemical, 2 food, 1 machinery, 1 power
System: 2 Adaptix, 1 Concorde, 1 Exact, 1 IFS, 1 Oracle, 1 Scala, 1 R/3
MRP*: 0%

      Succ   Size   Mod  PD   AD    Bud  Scope  US    Goal  PE   NE
Avg   .636   >500   3.1  8.4  11.4  124   82    3.38  1.8   2.8  0.8
Min   .363   >500   1    3    5     100   25    3     0     1    0
Max   .843   >500   4    14   18    200  100    4     4     4    4

Effect variables, number of objects = 2
Industry: 2 food; System: 1 Exact, 1 R/3; MRP*: 0%

      Succ   Size   Mod  PD   AD   Bud  Scope  US   Goal  PE   NE
Avg   .633   >200   4    3.5  3.5  105   85    1.5  1.5   1.5  2
Min   .589   >200   4    1    1    100   80    1    1     1    2
Max   .676   >300   4    6    6    110   90    2    2     2    2

All variables, number of objects = 2
Industry: 2 chemical; System: 1 Exact, 1 Scala; MRP*: 0%

      Succ   Size   Mod  PD   AD   Bud  Scope  US   Goal  PE  NE
Avg   .602   >500   4    4.5  5.5  130   80    3.5  1     3   0
Min   .569   >500   4    3    5    130   80    3    0     3   0
Max   .635   >500   4    6    6    130   80    4    2     3   0

* % of companies in the group implementing the MRP module
and is considerably lower among anti-exemplars. User satisfaction is definitely greater in the case of the exemplar clusters; furthermore, an apparent advantage of subjective positive effects over negative results was observed. Nevertheless, in the case of the anti-exemplar clusters, positive effects were also recognised more often than negative
outcomes, though to a lesser extent than among exemplar projects. The exemplar group extracted on the basis of both the effort and all variables consists of the largest companies, employing more than 1000 people. This suggests that in the largest enterprises ERP implementation brings about the best
results. However, within this group, on average fewer than 4 system modules were introduced, and none of the projects installed an MRP Explosion module. Hence, these implementations cannot be recognised as the most complicated. On the other hand, the most complicated full-scope implementations make up the majority of the projects reaching the best effects, i.e. those belonging to the exemplar group obtained on the basis of the effect variables. Namely, 57 percent of the "best effects" exemplar implementations introduced an MRP Explosion module, and the projects of this kind, on average, installed 4 other modules. The exemplar groups contain projects implementing only foreign systems, mainly those most popular among the implementations researched, i.e. R/3, IFS, and MMRP. Naturally, this can be partially explained by the frequency of their occurrence. However, the presence of well-known foreign packages, which have mature software solutions and implementation methodologies rooted in long-term experience, suggests that the reliability of the system solution is a deciding factor. On the other hand, practically all exemplar packages were also present among anti-exemplar projects or clusters. This suggests that the deciding factor is not only the system itself, but rather the way it is implemented in a particular organisation.
DISCUSSION OF FINDINGS

Lessons Learned

On the basis of the research results, the following observations can be made.

•	The company type does not seem to be a factor deciding about ERP implementation success. Companies obtaining the best effects as a consequence of ERP implementation are mainly manufacturing enterprises, though it is difficult to indicate any specific industry they belong to – they represent mainly food, but also metal, machinery, and other industries. On the other hand, almost all anti-exemplar companies were also manufacturing enterprises.
•	The lowest success level in implementing an ERP system was reached by companies operating in the food and chemical industries, as well as those belonging to the group of medium enterprises as regards the number of employees. Therefore, the results suggest that a company's industry can be a significant factor for project success.
•	Implementations tended to be more successful in large enterprises of 1000 or more employees. On the other hand, implementations that ended in failure, i.e. achieved a very low level of the success measure, occurred mainly among medium-sized enterprises of 500 to 1000 employees.
•	Exceptionally good effects were achieved by companies implementing an ERP system within its full functionality. Moreover, all anti-exemplar clusters include only partial projects. Thus, the integrating aspect of a system is seen when it embraces the company holistically.
The above-mentioned observations raise the issue of ERP system fit, i.e., whether a particular system solution fits a given company and whether a company really needs such a complicated system. A good illustration of this is the fact that chemical companies were present among the worst performers, yet none of them used the packages renowned for their outstanding performance in the chemical industry, like SAP R/3 (Stefanou, 2001). Furthermore, the results suggest that a particular system solution is not a factor determining project success: the same package could be found both among outstanding projects and among the worst performers. Instead, the way of introducing the system seems to play the vital role for the project outcome.
Comparison with Prior Research

This study's finding that company type does not seem to be a deciding factor in ERP implementation success partially supports the findings of Ettlie et al. (2005), who concluded that the strategic predictors of a successful enterprise system deployment do not depend on a company's industry, defined very broadly as the firm's core activity: manufacturing versus service. This definition of a company's industry seems equivalent to the understanding of company type employed by this study. Nonetheless, this research's results imply that a company's actual industry plays a crucial role and that practitioners have to pay special attention while implementing an ERP system in companies from the chemical and food industries. These findings are consistent with the results of Stensrud and Myrtveit (2003), who concluded that there were significant differences in productivity between projects in different industries and that projects conducted in the process industry were the least efficient. As regards the role of company size in enterprise system adoption, as was already mentioned, various researchers report mixed results. This study's findings contribute to this debate and suggest that the results achieved by large companies are greater than those achieved by small firms, which supports the findings of some prior research (e.g., Mabert et al., 2003). In particular, they are consistent with the results of Sun et al. (2005) and their claim that small and medium-sized companies' achievement increases up to some point, beyond which there is no significant achievement benefit. Nonetheless, we must bear in mind that other research suggests that the benefits achieved by small and large companies are similar (e.g., Shang & Seddon, 2000), and that difficulties experienced during enterprise system adoption generally do not differ across company size (Soja & Paliwoda-Pękosz, 2007).
However, Sun et al. (2005) emphasize that for small and medium-sized companies, people-related issues are of paramount importance. The results illustrating the vital need for an adequate amount of time planned for an implementation project are consistent with findings regarding difficulties during enterprise system implementation and impediments to its success. Prior studies recognize mainly organisational problems connected with time over-runs (Kremers & van Dissel, 2000; Soja, 2008; Themistocleous et al., 2001) and with the alignment of organisational structure with the enterprise system (Kim et al., 2005; Wright & Wright, 2002). This study's results confirm the findings of Peslak (2006), who perceives time and budget as the major determinants of project success. ES-adopting companies should use time wisely and adequately plan education and training so that when the system goes live, users are comfortable with it and understand what they are supposed to be doing, how, and why. This should also be ongoing long after the implementation is complete (Tinham, 2006). The second impediment most often recognized by prior research, i.e. the alignment of organisational structure with the enterprise system, is illustrated by the issues of system fit and implementation scope raised by this study's outcome. Prior research suggests that the issue of system fit especially concerns smaller companies, which tend to suffer more from system misfit (e.g., Soja & Paliwoda-Pękosz, 2007). The idea of fit between the system characteristics and the company's needs is highlighted by Peslak (2006), who advocates that modifications to enterprise systems should be minimized since they negatively affect both cost and time performance. The fit between the system features and companies' needs is also connected with the scope of an implementation project perceived in terms of introduced functionality. This study's findings illustrate that better results were achieved by companies which implemented a greater scope of ERP modules. These results are consistent with
the findings of Ranganathan and Brown (2006), who discovered that announcements of ERP adoptions with greater functional scope created greater abnormal stock market returns than those with lesser functional scope. Finally, the outcome of this study is consistent with research illustrating that enterprise system implementation should be treated as a business-led project, in contrast to an IT-related initiative (Law & Ngai, 2007; Nicolaou, 2004; Tinham, 2006). In particular, this study's results reveal that the best projects demonstrated the greatest number of declared goals and the highest level of goal achievement. This suggests that the best projects among the sample investigated were treated as business-driven endeavours.
Implications for Practitioners

Taking the research results into consideration, a series of suggestions for practitioners dealing with ERP projects can be formulated. Making use of these suggestions can have a positive influence on an implementation project's course and its final outcome.

•	The implementation endeavour has to be well planned – the best results were achieved by companies where the actual duration time was similar to the planned time, and the budget was as planned or only insignificantly exceeded.
•	It is necessary to ensure adequate time for system implementation; haste can cause problems and weaken the ultimate effect. In the research conducted, projects from the weakest group had an average planned time and usually exceeded it.
•	Special attention should be paid to partial-scope implementations; according to the results obtained, such implementations too often end in failure. This could be connected with an underestimation of the importance of a project and a lack of care during execution.
•	It is necessary to be careful in the case of implementation projects conducted in the food and chemical industries – the projects in companies representing those two branches most often ended in failure. This also suggests the need for further research on the influence of a company's industry on the project as a whole.
•	The implementers should pay special attention to the choice of a particular system solution. They have to ensure the proper fit between a system and the adopting organisation.
CONCLUSION

The study examines ERP implementations and, using statistical grouping methods, extracts the projects with the best and worst parameters. The core contribution of this paper is that it illustrates a new method of estimating ERP implementation success factors by employing combined clustering methods. The study's results can be useful for practitioners, as they suggest some recommendations for ERP implementation improvement. These suggestions emphasise the need for the proper organisation of the project and the issue of system fit to the particular business environment. Furthermore, this study should benefit the academic community, as it shows an innovative method of investigating the issues influencing ERP project outcome. Further research can enhance the process described by introducing more variables capturing a project's effects and efforts, and can also establish new categories of project estimation, such as implementation efficiency. The main limitation of this study is the sample of respondents. Though the number of research participants is quite substantial (64), further analysis should cover more companies and ensure a better distribution of projects.
In particular, this suggestion applies to package type, since having comparable samples of adopters of various packages would allow us to better investigate the issues connected with system fit and performance. The results also suggest the need for further research on project conditions depending on a company's industry.
REFERENCES

Adam, F., & O'Doherty, P. (2000). Lessons from enterprise resource planning implementation in Ireland – towards smaller and shorter ERP projects. Journal of Information Technology, 15, 305–316. doi:10.1080/02683960010008953

Amoako-Gyampah, K. (2007). Perceived Usefulness, User Involvement and Behavioral Intention: an Empirical Study of ERP Implementation. Computers in Human Behavior, 23, 1232–1248. doi:10.1016/j.chb.2004.12.002

Bernroider, E., & Koch, S. (2001). ERP selection process in midsize and large organizations. Business Process Management Journal, 7(3), 251–257. doi:10.1108/14637150110392746

Boudreau, M., Gefen, D., & Straub, D. (2001). Validation in IS Research: A State-of-the-Art Assessment. MIS Quarterly, 25(1), 1–16. doi:10.2307/3250956

Buonanno, G., Faverio, P., Pigni, F., Ravarini, A., Sciuto, D., & Tagliavini, M. (2005). Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies. Journal of Enterprise Information Management, 18(4), 384–426. doi:10.1108/17410390510609572

Chien, S.-W., & Tsaur, S.-M. (2007). Investigating the Success of ERP Systems: Case Studies in Three Taiwanese High-Tech Industries. Computers in Industry, 58(8-9), 783–793. doi:10.1016/j.compind.2007.02.001
El-Hamdouchi, A., & Willett, P. (1986). Hierarchic Document Classification Using Ward's Clustering Method. In Proceedings of the 9th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 149-156). New York: ACM Press.

Ettlie, J. E., Perotti, V. J., & Joseph, D. A. (2005). Strategic predictors of successful enterprise system deployment. International Journal of Operations & Production Management, 25(10), 953–972. doi:10.1108/01443570510619473

Everdingen, Y., Hillegersberg, J., & Waarts, E. (2000). ERP adoption by European midsize companies. Communications of the ACM, 43(4), 27–31. doi:10.1145/332051.332064

Everitt, B. S. (1993). Cluster Analysis. London: Edward Arnold.

Everitt, B. S., Landau, S., & Leese, M. (2001). Cluster Analysis. London: Edward Arnold.

Holland, C., Light, B., & Gibson, N. (1999). A Critical Success Factors Model for Enterprise Resource Planning Implementation. In Proceedings of the 7th European Conference on Information Systems ECIS (pp. 273-287). Copenhagen, Denmark: Copenhagen Business School.

Kaufman, L., & Rousseeuw, P. J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. New York: John Wiley & Sons.

Kim, Y., Lee, Z., & Gosain, S. (2005). Impediments to successful ERP implementation process. Business Process Management Journal, 11(2), 158–170. doi:10.1108/14637150510591156

Ko, D. G., Kirsch, L. J., & King, W. R. (2005). Antecedents of Knowledge Transfer from Consultants to Clients in Enterprise System Implementations. MIS Quarterly, 29(1), 59–85.
Kositanurit, B., Ngwenyama, O., & Osei-Bryson, K.-M. (2006). An Exploration of Factors that Impact Individual Performance in an ERP Environment: An Analysis Using Multiple Analytical Techniques. European Journal of Information Systems, 15, 556–568. doi:10.1057/palgrave.ejis.3000654

Kremers, M., & van Dissel, H. (2000). ERP System Migrations. Communications of the ACM, 43(4), 53–56. doi:10.1145/332051.332072

Law, C. C. H., & Ngai, E. W. T. (2007). ERP Systems Adoption: An Exploratory Study of the Organizational Factors and Impacts of ERP Success. Information & Management, 44, 418–432. doi:10.1016/j.im.2007.03.004

Loh, T. C., & Koh, S. C. L. (2004). Critical elements for a successful enterprise resource planning implementation in small- and medium-sized enterprises. International Journal of Production Research, 42(17), 3433–3455. doi:10.1080/00207540410001671679

Lyytinen, K. (1988). Expectation failure concept and systems analysts view of information systems failures: Results of an exploratory study. Information & Management, 14(1), 45–56. doi:10.1016/0378-7206(88)90066-3

Mabert, V. A., Soni, A., & Venkataramanan, M. A. (2003). The impact of organization size on enterprise resource planning (ERP) implementations in the US manufacturing sector. International Journal of Management Science, 31, 235–246.

McGinnis, T. C., & Huang, Z. (2007). Rethinking ERP Success: A New Perspective from Knowledge Management and Continuous Improvement. Information & Management, 44(7), 626–634. doi:10.1016/j.im.2007.05.006

McNurlin, B. C., & Sprague, R. H., Jr. (2002). Information Systems Management in Practice (5th ed.). Upper Saddle River, NJ: Prentice Hall.
Muscatello, J. R., Small, M. H., & Chen, I. J. (2003). Implementing ERP in small and midsize manufacturing firms. International Journal of Operations & Production Management, 23, 850–871. doi:10.1108/01443570310486329

National Statistics 2001 area classification (2001). Area classification for statistical wards. http://www.statistics.gov.uk/about/methodology_by_theme/area_classification/wards/downloads/area_classification_for_statistical_wards_methods.pdf, retrieved 2006-01-20.

Nicolaou, A. I. (2004). ERP Systems Implementation: Drivers of Post-Implementation Success. In Decision Support in an Uncertain and Complex World, The IFIP TC8/WG8.3 International Conference (pp. 589-597).

Parr, A., & Shanks, G. (2000). A Taxonomy of ERP Implementation Approaches. In Proceedings of the 33rd Hawaii International Conference on System Sciences HICSS (pp. 2424-2433). Maui, Hawaii, USA.

Peslak, A. R. (2006). Enterprise Resource Planning Success: An Exploratory Study of the Financial Executive Perspective. Industrial Management & Data Systems, 106(9), 1288–1303. doi:10.1108/02635570610712582

Ranganathan, C., & Brown, C. V. (2006). ERP Investments and the Market Value of Firms. Information Systems Research, 17(2), 145–161. doi:10.1287/isre.1060.0084

Raymond, L., & Uwizeyemungu, S. (2007). A profile of ERP adoption in manufacturing SMEs. Journal of Enterprise Information Management, 20(4), 487–502. doi:10.1108/17410390710772731

Sarkis, J., & Gunasekaran, A. (2003). Enterprise resource planning – modeling and analysis. European Journal of Operational Research, 146, 229–232. doi:10.1016/S0377-2217(02)00545-3
Schniederjans, M. J., & Kim, G. C. (2003). Implementing Enterprise Resource Planning Systems with Total Quality Control and Business Process Reengineering. International Journal of Operations & Production Management, 23(4), 418–429. doi:10.1108/01443570310467339
Soja, P. (2008). Difficulties in Enterprise System Implementation in Emerging Economies: Insights from an Exploratory Field Study in Poland. Information Technology for Development. Special Issue on Information Technology Investments in Emerging Economies, 14(1), 31–51.
Sedera, D., Gable, G., & Chan, T. (2003). ERP Success: Does Organization Size Matter? In Proceedings of the Pacific Asia Conference on Information Systems (PACIS), 10–13 July, Adelaide, South Australia, 1075-1088.
Soja, P., & Paliwoda-Pękosz, G. (2007). Towards the Causal Structure of Problems in Enterprise System Adoption. In Proceedings of the 13th Americas Conference on Information Systems, Keystone/Colorado, USA.
Shang, S., & Seddon, P. B. (2000). A Comprehensive Framework for Classifying Benefits of ERP Systems. In Proceedings of the 6th Americas Conference on Information Systems, Long Beach, CA, USA, 1005-1014.
Stefanou, C. J. (2001). A framework for the ex-ante evaluation of ERP software. European Journal of Information Systems, 10(4), 204–215. doi:10.1057/palgrave.ejis.3000407
Sharma, A., & Vyas, P. (2007). DSS (Decision Support Systems) in Indian Organised Retail Sector. Indian Institute of Management.

Soh, C., & Sia, S. K. (2005). The challenges of implementing "vanilla" version of enterprise systems. MIS Quarterly Executive, 4(3), 373–384.

Soja, P. (2004). Success Factors in ERP Systems Implementations: Result of research on the Polish ERP market. In Proceedings of the 10th Americas Conference on Information Systems AMCIS (pp. 3914-3922). New York, USA.

Soja, P. (2005). The Impact of ERP Implementation on the Enterprise – an Empirical Study. In Proceedings of the 8th International Conference on Business Information Systems (pp. 389-402). Poznan, Poland.

Soja, P. (2006). Success factors in ERP systems implementations: lessons from practice. Journal of Enterprise Information Management, 19(4), 418–433. doi:10.1108/17410390610678331
Stensrud, E., & Myrtveit, I. (2003). Identifying High Performance ERP Projects. IEEE Transactions on Software Engineering, 29(5), 398–416. doi:10.1109/TSE.2003.1199070

Sun, A. Y. T., Yazdani, A., & Overend, J. D. (2005). Achievement Assessment for Enterprise Resource Planning (ERP) System Implementations Based on Critical Success Factors (CSFs). International Journal of Production Economics, 98, 189–203. doi:10.1016/j.ijpe.2004.05.013

The Commission of the European Community. (1996). (96/280/EC) Commission recommendation of 3 April 1996 concerning the definition of small and medium-sized enterprises. Official Journal, No. L 107, 30/04/1996, pp. 4-9.

Themistocleous, M., Irani, Z., O'Keefe, R. M., & Paul, R. (2001). ERP Problems and Application Integration Issues: An Empirical Survey. In Proceedings of the 34th Hawaii International Conference on System Sciences.

Tinham, B. (2006). Your Guide to Choosing and Implementing ERP. Manufacturing Computer Solutions.
Wright, S., & Wright, A. M. (2002). Information System Assurance for Enterprise Resource Planning Systems: Unique Risk Considerations. Journal of Information Systems, 16(Supplement), 99–113. doi:10.2308/jis.2002.16.s-1.99
Wu, J.-H., & Wang, Y.-M. (2007). Measuring ERP Success: The Key-Users’ Viewpoint of the ERP to Produce a Viable IS in the Organization. Computers in Human Behavior, 23, 1582–1596. doi:10.1016/j.chb.2005.07.005
This work was previously published in Global Implications of Modern Enterprise Information Systems: Technologies and Applications, edited by Angappa Gunasekaran, pp. 114-136, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 4.12
Challenges and Solutions for Complex Business Process Management

Minhong Wang
The University of Hong Kong, Hong Kong

Kuldeep Kumar
Florida International University, USA

DOI: 10.4018/978-1-60566-669-3.ch001
ABSTRACT

A business process displays complexity as a result of multiple interactions of its internal components and interaction between the process and its environment. To manage complexity and foster flexibility of business process management (BPM), we present the DCAR architecture for developing complex BPM systems, which includes decomposition of complex processes (D); coordination of interactive activities (C); awareness of dynamic environments (A); and resource selection and coordination (R). On the other hand, computing technologies, such as object-oriented programming, component-based development, agent-oriented computing, and
service-oriented architecture have been applied in modeling and developing complex systems. However, there is considerable ambiguity involved in differentiating between these overlapping technologies and their use in developing BPM systems. No explicit linkage has been established between the requirements of complex BPM and the supporting technologies. In this study, we use the DCAR architecture as the foundation to identify the BPM requirements for employing technologies in developing BPM systems. Based on an examination of both sides (BPM requirements and supporting technologies), we present a clear picture of business process complexity with a systemic approach for developing complex BPM systems by using appropriate computing technologies.
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Businesses around the world are paying more attention to process management and process automation to improve organizational efficiency and effectiveness. It is increasingly common to describe organizations as sets of business processes that can be improved by business process management (BPM). Most approaches to BPM have used information technologies to support or automate business processes, in whole or in part, by providing computer-based systems support. These technology-based systems help coordinate and streamline business transactions, reduce operational costs, and promote real-time visibility in business performance. Traditional approaches to building and implementing BPM systems use workflow technologies to design and control the business process. Workflow-based systems follow highly structured and predefined workflow models, and are well suited to applications with stable inputs, processes, and outputs. Contemporary business processes are becoming increasingly complex and dynamic as they seek to cope with a wide range of internal and external interactions and changes. To provide sufficient flexibility and adaptability in BPM, a number of researchers have been investigating the approaches and techniques for developing BPM systems for an increasingly turbulent environment (Casati et al., 1999; Chiu et al., 1999; Weske, 2001; Wang et al., 2002, 2005a; K. Kumar et al., 2006). Most studies have focused on present process structures and provide rapid response to changes that lead to temporary and short-term fluctuations in the organization's activities. In this study, we view a business process as a complex system that adapts to continuously changing and unpredictable environments in order to survive. A business process displays complexity because of multiple interactions of its internal components and interaction between the process and its environment. To manage complexity and
foster flexibility of complex systems, modularity is the key to the solution (Baldwin et al., 1997; Simon, 1981). Modularity in BPM requires decomposing a complex BPM system into a number of interacting components that perform the processes. Based on the investigation of business process complexity and modularity theory, we present the DCAR architecture for developing complex BPM systems, which include decomposition of complex processes (D); coordination of interactive activities (C); awareness of dynamic environments (A); and resource selection and coordination (R). On the other hand, various modular computing technologies, such as Object-Oriented Programming (OOP); Component-Based Development (CBD); Agent-Oriented Computing (AOC); and Service-Oriented Architecture (SOA); have emerged to model and develop complex systems. There has been a proliferation of studies about the application of these modular technologies in developing BPM systems (Weske, 1998; Kammer et al., 2000; Jennings et al., 2002; Wang et al., 2005b; Leymann et al., 2002). As the modular computing paradigms and technologies become popular, researchers often attempt to employ and integrate them in creating business process management solutions. However, there is considerable ambiguity involved in differentiating between these overlapping terminologies and consequently their use for BPM systems development. The fundamental questions about the use of these technologies, i.e., why we need to use them for solutions of BPM, how we apply them, and how we integrate them with other solutions, remain unexamined. Most research on technology support for BPM is experience-driven, ad-hoc, and often lacks a systematic analysis of the rationale for the technology support (Wang et al., 2008a). Little work has examined the root of complexity of business processes, the need for effective approaches to BPM, and how this need affects the technology solutions for process management (K. Kumar et al., 2006). In this study, we analyze
1061
Challenges and Solutions for Complex Business Process Management
the differences and relationships between these overlapping terminologies and techniques, and match them to BPM requirements. The DCAR architecture we propose for complex BPM is used as the foundation to identify the BPM requirements for employing these modular computing technologies in developing BPM systems. Based on an examination of both sides (BPM requirements and supporting technologies), we present a clear picture of business process complexity together with a systemic approach showing how these technologies can be applied and integrated in developing systems for complex process management.
RESEARCH BACKGROUND

Business Process

A business process can be simply defined as a collection of activities that create value by transforming inputs into more valuable outputs (Hammer et al., 1993). These activities consist of a series of steps performed by actors to produce a product or service for the customer. In more detail, a business process is typically a coordinated and logically sequenced set of work activities and associated resources that produce something of value to a customer (El Sawy, 2001). Each process has an identified customer; it is initiated by a process trigger or a business event (usually a request for a product or service arriving from the process customer); and it produces a process outcome (the product or service requested by the customer) as its deliverable to the process customer. Given the scope and variety of actions that are needed to produce the product or service, the process is differentiated (sub-divided) into a set of tasks or activities. These tasks are assigned to and performed by actors (either machines or humans), using resources such as workstations, machines, raw materials, and supplies that are available to the actors.
Business Process Management

Business Process Management (BPM) refers to activities performed by organizations to design (capture processes and document their design in terms of process maps), model (define business processes in a computer language), execute (develop software that enables the process), monitor (track individual processes for performance measurement), and optimize (retrieve process performance for improvement) operational business processes by using a combination of models, methods, techniques, and tools (van der Aalst et al., 2002; Melão et al., 2000). Process design in turn includes the differentiation or subdivision of the process into underlying tasks or activities, process configuration (arranging the process tasks into a logical sequence to produce the process outcome), and the selection and allocation of specific actors and resources to particular process tasks. The monitoring and control phases can be an extension to provide feedback into the continuing design, implementation, and execution cycle. Among the various approaches and methods for BPM, this study focuses on the use of information technologies to support or automate business processes, in whole or in part, by providing computer-based systems support. These technology-based systems help coordinate and streamline business transactions, reduce operational costs, and promote real-time visibility in business performance. Traditional approaches to building and implementing BPM systems use workflow technologies to design and control the business process (van der Aalst et al., 2002). Workflow-based systems follow highly structured and predefined workflow models, and are well suited to applications with stable inputs, processes, and outputs. Contemporary business processes are complex and dynamic. They evolve and change over time as a result of complex interactions, resource competition, breakdowns and abnormal events, and other sources of uncertainty. Realizing the need
to provide sufficient flexibility and adaptability in BPM, many researchers are investigating the approaches and techniques for developing BPM systems for an increasingly turbulent environment. In pursuing this, most studies have focused on the capabilities that are based on present process structures of an organization and provide rapid response to changes that lead to temporary and short-term fluctuations in the organization's activities. Efforts to improve flexibility in business processes can be found in numerous studies on adaptive workflow/process modeling, workflow/process monitoring, and exception management (Casati et al., 1999; Chiu et al., 1999; Weske, 2001; Wang et al., 2002, 2005a; K. Kumar et al., 2006). On the other hand, while faced with revolutionary changes, an organization needs more flexibility to rearrange its internal or intra-organizational structures and processes. Efforts can be found in research on business process analysis and redesign, and business process reconstruction or reengineering, which enable new forms of working and collaborating within an organization or in cross-border businesses (Hammer et al., 1993). With the recent growth of electronic services, a business process can be dynamically established by connecting or composing appropriate services based on market demand (A. Kumar et al., 2002; Petrie et al., 2003; K. Kumar et al., 2007). A worldwide network of organizations can be formed through process composition over the Internet (Casati et al., 2001).
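As a toy rendering of the design/model/execute/monitor/optimize cycle listed at the start of this subsection, the Python sketch below strings the five phases together; the data structures and phase bodies are invented placeholders, not an actual BPM suite API.

# A toy BPM lifecycle; phase names follow the text, everything else is
# an illustrative assumption.
def design(name):                 # capture and document the process map
    return {"name": name, "tasks": ["receive order", "check stock", "ship"]}

def model(process):               # define the process in machine-readable form
    process["model"] = [{"task": t, "next": i + 1}
                        for i, t in enumerate(process["tasks"])]
    return process

def execute(process):             # run the enabled process
    return [f"executed: {step['task']}" for step in process["model"]]

def monitor(trace):               # track individual runs for measurement
    return {"steps": len(trace),
            "ok": all(s.startswith("executed") for s in trace)}

def optimize(process, metrics):   # feed performance back into (re)design
    return process if metrics["ok"] else design(process["name"])

p = model(design("order fulfilment"))
metrics = monitor(execute(p))
p = optimize(p, metrics)
print(metrics)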
CHALLENGES OF BUSINESS PROCESS MANAGEMENT

Business Requirement

To business, dealing with changes is a fact of everyday life that must be exploited. Real-world processes are often much messier than the typical input-transformation-output view suggests.
Business processes are best viewed as networks, in which a number of actors collaborate and interact to achieve a business goal. A business process displays complexity because of multiple interactions of its internal components and interaction between the process and its environment (Melão et al., 2000). In recent years, business environments have been changing from centralized-and-closed to distributed-and-open. Business processes are becoming increasingly complex and dynamic as they seek to cope with a wide range of internal and external interactions and changes. In this situation, BPM should be able to manage a number of components and their complex interactions in business processes, particularly in a continuously changing and interplayed environment (Wang et al., 2006a; K. Kumar et al., 2006). Furthermore, attention should be paid to situations where dynamic collaboration and soft-connection between business partners play an increasingly important role (K. Kumar, 2001; Wang et al., 2008b). This needs both information technologies and managerial capabilities to adapt the organization structure and its decision-making and communication processes in order to facilitate cross-hierarchical, cross-functional, cross-product/service, and cross-market capability development. In this context, we need to shift from the mechanistic view of the workflow paradigm, which focuses on the static and structural features of a business process. Instead, business processes can be viewed as complex dynamic systems that adapt to continuously changing and unpredictable environments in order to survive. This requires the integration of organizational, managerial, and technological issues in understanding and managing business processes. In addition to a set of logically related tasks, more aspects need to be taken into account, such as environment awareness, knowledge for process management, flexible resource coordination, and so on.
Technology Support

To deal with a complex system like business process management, software technologies have been in constant evolution. Structured or function-oriented analysis was used in the 1970s for the functional decomposition of more stable systems, concealing the details of an algorithm in a function. To deal with constant changes related to data structures, object-oriented programming (OOP) was introduced to separate data from the applications, with data and its corresponding operations encapsulated within an object. Based on this method, traditional workflow systems have been developed by taking processes or workflows out of the applications to improve the control and change of business processes (Weske, 1998). More recently, to deal with more complex and frequent changes of business processes, new computing technologies have emerged in BPM, such as Component-Based Development (CBD) in Kammer et al. (2000); Agent-Oriented Computing (AOC) in Jennings et al. (2002) and Wang et al. (2005b); and Service-Oriented Architecture (SOA) in Leymann et al. (2002). However, the fundamental questions about the use of these technologies, i.e., why we need to use them for BPM solutions, how we apply them, and how we integrate them with other solutions, remain unexamined. Most research on technology support for BPM is experience-driven, ad hoc, and often lacks a systematic analysis of the rationale for the technology support (Wang et al., 2008a). There is only minimal work that examines the root of the complexity of business processes, the need for effective approaches to BPM, and how this need affects the technology solutions for process management (K. Kumar et al., 2006).
MODULARIZATION FOR COMPLEX SYSTEMS

As discussed, business processes display complexity due to the interactions of their internal components and the interaction of the process with its environment. Business processes are complex systems that are made up of a number of interacting objects with dynamic behavior. Alexander and Simon addressed the theories about how we design complex systems: to design a complex structure, one powerful technique is to decompose it into semi-independent and interrelated components, which in turn have their own components (Simon, 1981). Though they did not use the word "modularity", the concept was central to their thinking. Baldwin and Clark (1997) described modularity as a particular design structure, which refers to the development of a complex product or process from smaller subsystems that can be designed independently. It is possible to view all entities – social, biological, technological, or otherwise – as hierarchically nested systems. Modularity is the key to managing complexity and fostering flexibility of complex systems (Baldwin et al., 1997; Simon, 1981). It ensures easy maintenance and updates of complex systems by reducing the interactions among the components. A complex system can be managed by separating the high-frequency intra-module linkages from the low-frequency inter-module linkages, and limiting the scope of interactions between modules by hiding the intra-module relations inside a module box. A module in a complex system is a unit whose structural elements are powerfully connected among themselves and relatively weakly connected to elements in other units. The guiding principle for decomposition is that intra-module cohesion is generally stronger than inter-module coupling. This fact can be used to distinguish diverse interactions and to deal with them by encapsulating the intra-module interactions inside a module box (Wolters, 2002).
Using modularization, a complex structure can be decomposed into sub-functions, sub-processes, sub-areas, and in other ways. Alternative decompositions correspond to different ways of dividing the responsibilities. While pursuing modularization, choosing the right granularity of the components is an important issue in decomposing complex systems. The identity of any unit as the module is not fixed, but determined by the level of analysis we choose (Schilling, 2003). Granularity is the size of the unit under consideration; it refers to the degree to which a system can be separated and reorganized. For example, a company is generally divided into departments, the departments into groups, and the groups into employees. Systems of large components are called coarse-grained, and systems of small components are called fine-grained. Granularity is a relative concept that can be precisely defined only in a specific context (Elfatatry, 2007). For instance, if a component implements all functions of a banking system, it can be considered coarse-grained; if it supports just credit-balance checking, it is considered fine-grained. The greater the granularity, the greater the flexibility, but also the greater the overheads of synchronization and communication. To maximize the performance of modularization, a complex system must be analyzed and implemented by choosing the right granularity of the components, and a tradeoff between modularity and integrity should be considered to ensure overall system performance (Garud et al., 2003). With respect to business process management, modularity supports two things. First, modularity increases the range of "manageable" complexity. It does this by limiting the scope of interactions between various components by encapsulating some components (such as data, resources, knowledge, and the responsibility of a specific task) and their interactions inside the task module, thereby reducing the amount and range of interweaving that occurs in an interconnected process. Second, modularity accommodates uncertainty. Through modularity, components are partitioned into those
that are visible and those that are hidden. Hidden components lying inside a black box (the task module) are isolated from other parts and are allowed to vary without changing much in other parts. This provides flexible process structures by permitting uncertainty as well as accommodating changes.
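As a toy illustration of this visible/hidden partition, the sketch below hides a task's data, resources, and sub-steps behind a single visible interface; all names are invented for the example.

# A toy task module: data, resources, and sub-steps stay hidden inside
# the module; only execute() is visible to the rest of the process.
class TaskModule:
    def __init__(self):
        self._resources = {"machine": "press-1"}     # hidden component

    def _allocate(self):                             # hidden interaction
        return self._resources["machine"]

    def execute(self, job):                          # visible interface
        return f"{job} completed on {self._allocate()}"

# Other modules depend only on the interface, so the hidden parts can
# vary (a different machine, extra sub-steps) without affecting them.
print(TaskModule().execute("stamping"))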
DCAR ARCHITECTURE FOR COMPLEX PROCESS MANAGEMENT

Based on the above investigation of BPM challenges (3rd section) and the modularity theory for managing complexity (4th section), we propose the DCAR architecture for developing BPM systems, which includes decomposition of complex processes (D); coordination of interactive activities (C); awareness of dynamic environments (A); and resource selection and coordination (R).
Decomposition of Complex Processes (D)

According to the modular architecture, a business process can be decomposed into tasks, tasks into sub-tasks, and so on. Interactions between sub-tasks within a task are often encapsulated within the task; interactions between tasks are encapsulated within their higher-level process or task. This raises the issue of how we decompose complex processes. Traditional workflow approaches have selected the "task" as the basic module for building process management systems. A business process can be decomposed into a number of semi-dependent and interrelated tasks. These tasks are then linked to each other in a pre-established and usually sequential inter-relationship or dependency. With the extension of business processes from an intra-organizational to an inter-organizational scope, we need to deal with interactions within an organization as well as interactions across organizations. Moreover, the complexity of BPM is increased by the interweaving of inter- and intra-organizational interactions.
To manage the complexity, we need to distinguish between inter- and intra-organizational interactions and deal with them by isolating one type from another. We propose "service" as a high-level view of the building block of a process, where a process is composed of a set of services. Each service is provided by a corresponding actor (organization, individual, or computer program), and can be further decomposed into sub-tasks. For example, a complex supply chain management process can be decomposed into "customer order service", "procurement service", "manufacturing service", and "transportation service"; each service can be provided by a corresponding organizational actor, and can be further decomposed throughout several layers as shown in Figure 1.

Figure 1. Decomposition of a supply chain process
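A minimal sketch of this service-based decomposition is given below, using a simple composite structure; the class design and the actor names are illustrative assumptions rather than part of the DCAR specification.

# A composite "service as building block" structure mirroring Figure 1.
class ProcessNode:
    def __init__(self, name, actor=None):
        self.name = name            # e.g., "procurement service"
        self.actor = actor          # organization, individual, or program
        self.children = []          # sub-services or sub-tasks

    def add(self, *nodes):
        self.children.extend(nodes)
        return self

    def show(self, depth=0):
        label = self.name + (f" [{self.actor}]" if self.actor else "")
        print("  " * depth + label)
        for child in self.children:
            child.show(depth + 1)

scm = ProcessNode("supply chain management")
order = ProcessNode("customer order service", "retailer").add(
    ProcessNode("customer login"), ProcessNode("order input"),
    ProcessNode("order confirmation"), ProcessNode("order submission"))
scm.add(order,
        ProcessNode("procurement service", "supplier"),
        ProcessNode("manufacturing service", "manufacturer"),
        ProcessNode("transportation service", "carrier"))
scm.show()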
Coordination of Interactive Activities (C)

To manage complex interactions in complex processes, multiple actors, activities, resources, and goals need to be coordinated. Every organized human activity gives rise to two fundamental
requirements: 1) differentiation, or the division of work into tasks to be performed by various actors; and 2) integration, that is, the coordination of these tasks to accomplish the goals of the activity (Mintzberg, 1979). After decomposing a complex process into a number of task components, we need to coordinate various interactions between the components at different levels in a network hierarchy. In the context of a hierarchy, a component can be involved in vertical interactions with its subordinates and super-ordinates, and in horizontal interactions with its peers. A component in a complex system, no matter how large or small, may interact with a limited set of superiors, inferiors and coordinate peers (Simon, 1981). In a complex system, the components interact, represented as coupling or dependency between the components. Based on degree of strength of dependency, the components in a complex system are loosely or tightly coupled (Simon, 1977). Based on the relationship between the components in their interactions, the components in a complex system can be centrally or de-centrally coordinated.
Take the example of supply chain management in Figure 1. In the loosely coupled context of "supply chain management", the participating components include "customer order service", "procurement service", "manufacturing service", and "transportation service". Each service is regarded as an autonomous entity, managing its own activities as well as its interaction with other services. Decentralized coordination or mutual negotiation can be suggested to govern the supply chain if no formal centralized governance authority exists. In the tightly coupled context of "order process", the participating components, including "customer login", "order input", "order confirmation", and "order submission", are highly interdependent and are bundled together into a single integrated package. Centralized coordination can be used if there is a central authority granted the power to govern the order process. Mintzberg further suggests that environmental uncertainty is an important determinant of the mode of interaction and coordination. The more stable and predictable the situation, the greater the reliance on coordination based on structured and specifiable schedules, such as coordination by plan and coordination by standardization. The more variable and unpredictable the situation, the greater the reliance on informal and flexible communication, such as coordination by feedback and coordination by mutual adjustment (K. Kumar et al., 1996, 2007). When faced with increased uncertainty in dynamic environments, organizations need to use more flexible coordination mechanisms to coordinate their business activities. Flexible coordination is characterized by more bottom-up initiatives and less centralization of decision-making at the top. This requires flatter hierarchies, decentralized autonomy-based units, and decision-based coordination, which in turn reduces direct hierarchical control and encourages greater mutual adjustment and coordination between the work units (Mintzberg, 1979; Volberda, 1999).
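The coordination choices discussed above can be summarized as a simple decision rule. The following sketch is purely illustrative; the category labels and the returned mechanism names are our paraphrase of Mintzberg's coordination modes, not part of any proposed system.

```python
# An illustrative mapping from coupling strength and environmental
# uncertainty to a coordination mechanism, paraphrasing the discussion
# above; the categories and return values are assumptions for exposition.
def choose_coordination(coupling: str, environment: str,
                        central_authority: bool) -> str:
    if environment == "stable":
        # Structured, specifiable schedules suit predictable situations:
        # coordination by plan or by standardization.
        return "coordination by plan/standardization"
    if coupling == "tight" and central_authority:
        return "centralized coordination"
    # Variable, unpredictable situations favor informal, flexible
    # communication: coordination by feedback and mutual adjustment.
    return "decentralized coordination by mutual adjustment"

print(choose_coordination("loose", "dynamic", central_authority=False))
print(choose_coordination("tight", "stable", central_authority=True))
```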
Awareness of Turbulent Business Environment (A)

As a result of complex interactions, resource competition, abnormal events, and other sources of uncertainty, business processes continuously evolve and change over time. A complex process is usually semi-structured or unstructured to the extent that there is an absence of routine procedures for dealing with it. In such situations, BPM solutions do not depend on providing the computer system with exact details about how to accomplish a process, but instead provide the system with guidelines to help it determine how to deal with the process. In other words, problem solving is regarded as an interaction between the behaving organism and the environment under the guidance of a control system. Information and data are input to this system, represented in its memory as declarative knowledge, and then used in problem solving following algorithmic or heuristic steps (Wang et al., 2006a). A complex system interacts with and adapts to its environment for survival, in addition to the interactions of its internal components. Any adaptive system must develop correlations between goals and actions in the world of process. This requires continuous translation between state and process, and discovery of a sequence of processes that will produce the goal state from an initial state based on means-ends analysis (Holland, 1995; Simon, 2003).

A basic idea underlying this approach is the control of complex dynamic systems or situations based on situation awareness. Awareness, according to biological psychology, is a human's or an animal's perception of and cognitive reaction to a condition or event. Situation awareness is the perception and understanding of objects, events, people, system states, interactions, environmental conditions, and other situation-specific factors in complex and dynamic environments (Endsley, 1995). Situation awareness underpins real-time reactions to environmental changes. In terms of cognitive psychology, situation awareness refers to the active content of a decision-maker's mental model; its purpose is to enable rapid and appropriate decisions and effective actions. In a dynamic business process environment, an exact execution order of activities is impractical; the interaction or relationship between the environment and activities is more appropriate in determining how to manage and coordinate activities (Wang et al., 2006a). This dynamism therefore requires spontaneous decisions and coordination of processes based on situation awareness. In this context, BPM should be able to coordinate processes by sensing and comprehending the situation and determining responses to it, while at the same time taking actions to work towards business goals. In other words, the question of which task to execute and when to execute it depends on the current environment and the underlying business rules rather than on a static process schema. To achieve this, knowledge or rules for process management have become an important foundation for BPM in dynamic environments. The modular system architecture can greatly improve the ability to identify and leverage knowledge for coordination in business processes (Sanchez et al., 2003). Through modularization, the knowledge becomes embodied in specific BPM components, which helps an organization to discover and focus on opportunities for organizational learning and capability development in BPM.
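As a rough illustration of rule-driven rather than schema-driven task selection, the sketch below picks the next task by evaluating business rules against the current environment state. The rules and the state flags are hypothetical, invented only to make the idea concrete.

```python
# A minimal sketch of rule-driven task selection: the next task is chosen
# by evaluating business rules against the current environment state,
# rather than by following a static process schema.
rules = [
    (lambda env: env["order_received"] and not env["stock_checked"],
     "check stock"),
    (lambda env: env["stock_checked"] and env["in_stock"],
     "confirm order"),
    (lambda env: env["stock_checked"] and not env["in_stock"],
     "start procurement"),
]

def next_task(env: dict) -> str:
    for condition, task in rules:
        if condition(env):
            return task
    return "wait"   # no rule fires in the current situation

print(next_task({"order_received": True, "stock_checked": False}))
print(next_task({"order_received": True, "stock_checked": True,
                 "in_stock": False}))
```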
Resource Selection and Coordination (R)

Business processes require actors and resources to perform the tasks. These actors and their associated resources may reside either within the organization or, in the case of inter-organizational processes, across a network of multiple organizations. In some situations, more than one actor may be qualified and available for performing a task, while in others, more than one task requires the same actor and resource. Resource management is an important and complicated issue for the efficiency and effectiveness of business processes or workflows. In addition to task, structure, and procedure, the resource aspects of a business process should be taken into account. However, the traditional process model or definition does not include the resource concept. Though some common principles for resource allocation in workflow management (e.g., first-in first-out, shortest processing time, and earliest due date) have been recommended (van der Aalst et al., 2002), research on resource management, in particular on process flexibility through resource selection and coordination in business processes, is far from sufficient. In today's business environment, business networks of resources and actors can be temporarily assembled, integrated, and driven by demands that emerge and operate for the lifespan of the market opportunity (K. Kumar, 2001). In this conception, a firm is not considered as a black box guided by the strategist, but as a bundle of firm-specific resources of use for specific tasks. Along with this, new business models have accordingly come into view, such as the demand chain, the virtual enterprise, and the electronic marketplace. They allow companies to operate in dynamically changing environments by quickly and accurately evaluating new market opportunities or new products. The companies may coordinate with potential partners in demand-driven and resource-based soft connections that are made for the duration of the market opportunity. As a result, a business process can be dynamically established at run-time by connecting or composing several services together from different organizations through alliances, partnerships, or joint ventures. In this situation, attention on business processes should go beyond task and procedure, and extend to other elements, including resource discovery, selection, integration, and coordination. What is new in this business process model is its reliance on the idea of separating resource requirements from concrete satisfiers (Mowshowitz, 1997).
This separation allows for crafting process structures that enable switching between different resource options for implementing a process, as shown in Figure 2. It creates an environment in which the means for reaching a goal are evaluated and selected for optimized performance. The success of the model is highly dependent on the match between the requirements and the satisfiers that deliver the services. One way to ensure this balance is to model the integration or composition of business processes as a management problem which involves: 1) the separation of requirements from the means for realization, and 2) the dynamic selection and allocation of available resources to requirements.
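The separation of requirements from satisfiers can be illustrated with a small matching routine: a requirement is stated abstractly, candidate satisfiers are evaluated at run-time, and the best match is bound to the requirement. All provider data below is invented for illustration.

```python
# An illustrative separation of abstract resource requirements from
# concrete satisfiers: candidate providers are evaluated at run-time and
# the best match is bound to the requirement.
providers = [
    {"name": "carrier A", "service": "transportation", "cost": 120, "days": 2},
    {"name": "carrier B", "service": "transportation", "cost": 90,  "days": 5},
    {"name": "plant X",   "service": "manufacturing",  "cost": 400, "days": 7},
]

def bind(requirement: str, max_days: int):
    candidates = [p for p in providers
                  if p["service"] == requirement and p["days"] <= max_days]
    if not candidates:
        return None                       # no satisfier meets the constraints
    return min(candidates, key=lambda p: p["cost"])   # optimize on cost

print(bind("transportation", max_days=3))   # -> carrier A
```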
Figure 2. Resource selection and coordination in business processes

INVESTIGATION OF MODULAR COMPUTING TECHNOLOGIES

There has been a proliferation of studies about the application of agent, service, component, and object-oriented computing technologies to BPM (Jennings et al., 2002; Wang et al., 2005b; Leymann et al., 2002; Kammer et al., 2000; Weske, 1998). These modular computing technologies have facilitated the modeling of process architecture by modular software architectures (object, component, Web service, and software agent), thereby creating analogs of the business process in software. Each paradigm represents a philosophy of perception, abstraction, and decomposition of complex systems in order to deal with changes. As the modular computing paradigms and technologies become popular, researchers often attempt to employ and integrate them in creating business process management solutions. However, different modular technologies have different philosophical beliefs with respect to how a complex system should be decomposed in order to tackle changes (Elfatatry, 2007). These technologies are usually applied in BPM without identifying the real rationale for their use in BPM scenarios. At present there is considerable ambiguity in differentiating between these overlapping terminologies and, consequently, in their use for BPM systems development. For example, we often hear people describing their proposed solutions as agent-based systems when they may simply be using object abstraction. Furthermore, the common-sense understandings of these concepts and technologies do not easily map onto each other. Unless we have clarity on these terminologies and how to use them, the application and integration of these techniques is likely to be problematic. In this section we outline the four concepts: Agents and Agent-Oriented Computing; Services and Service-Oriented Architecture; Objects and Object-Oriented Programming; and Components and Component-Based Development. The purpose is not to re-define these concepts and approaches, but to clarify their similarities and differences, in particular with respect to their abstraction and decomposition techniques, and to make explicit some of the underlying assumptions inherent in the use of the terminology.
Object-Oriented Programming (OOP)

Object-Oriented Programming (OOP) is a software engineering paradigm that uses "objects" and their interactions to design applications and computer programs (Rumbaugh, 1991). A program is seen as a collection of cooperating objects, as opposed to the traditional view in which a program is seen as a list of instructions to the computer. The OOP paradigm addresses issues of reuse and maintenance by encapsulating data and its corresponding operations within an object class; without such encapsulation, changing a data structure often requires changing all the functions related to that data structure. OOP was deployed as an attempt to promote greater flexibility and maintainability, since the concept of objects in the problem domain has a higher chance of being stable than functions and data structures.
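A textbook illustration of this encapsulation argument (our own example, not drawn from the chapter): callers interact only with the object's operations, so the underlying data representation can change without touching them.

```python
# Encapsulation in OOP: the data structure behind Account can change
# without affecting callers, because they use only the object's operations.
class Account:
    def __init__(self, balance: float = 0.0):
        self._balance = balance          # internal representation, hidden

    def deposit(self, amount: float) -> None:
        self._balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self) -> float:
        return self._balance

acct = Account(100.0)
acct.deposit(50.0)
acct.withdraw(30.0)
print(acct.balance)   # 120.0
```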
Component-Based Development (CBD)

Component-Based Development (CBD) is another branch of the software engineering discipline, with an emphasis on decomposing engineered systems into functional or logical components with well-defined interfaces used for communication across the components. CBD includes a component model and an interface model. The component model specifies how each component behaves in an arbitrary environment, and the interface model specifies how each component interacts with its environment (Szyperski, 2002).

OOP vs. CBD. A component is a small group of objects working together to provide a system function. It can be viewed as a black box at the level of a large system function. At a fine level of granularity, we use objects to hide behavior and data; at a coarser level of granularity, we use components to do the same. OOP focuses on the encapsulation of both the data and the behavior of an object. CBD goes further by supporting public interfaces used for communication across components. Object-Oriented (OO) technologies provide rich models to describe problem domains; however, they are not enough to adapt to the changing requirements of real-world software systems (Elfatatry, 2007). OOP assumes that it is possible to identify and solve almost all problems before coding, while CBD, and later SOA and AOC, adopt a more pragmatic approach that regards business system development as an incremental process in which changes are an inescapable aspect of software design. Specifically, objects are too fine-grained and do not make a clear separation between computational and compositional aspects. Components were then proposed to encapsulate the computational details of a set of objects. Using CBD, software development can be improved since applications can be quickly assembled from a large collection of prefabricated and interoperable software components.
Components inherit many of the characteristics of objects in the OO paradigm, but the component notion goes further by separating the interface from the component model. OO reuse usually means reuse of class libraries in a particular OO programming language or environment. For example, to be able to reuse a SmallTalk or Java class, you have to be conversant with the SmallTalk or Java language. A component, by using a public interface, can be reused without even knowing which programming language or platform it uses internally.
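The interface/implementation separation can be sketched as follows. The payment interface and component are hypothetical examples, with an abstract base class standing in for a published component interface.

```python
# A sketch of the interface/implementation separation that CBD adds on top
# of objects: clients program against the published interface, and any
# conforming component can be substituted, whatever it uses internally.
from abc import ABC, abstractmethod

class PaymentInterface(ABC):             # the published interface
    @abstractmethod
    def charge(self, amount: float) -> bool: ...

class CardPaymentComponent(PaymentInterface):
    def charge(self, amount: float) -> bool:
        # internal details (language, platform, data) stay hidden
        return amount > 0

def checkout(payment: PaymentInterface, amount: float) -> None:
    # the client knows only the interface, not the component's internals
    print("paid" if payment.charge(amount) else "failed")

checkout(CardPaymentComponent(), 25.0)
```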
Service-Oriented Architecture (SOA)

A service is defined as an act or performance that one party can offer to another that is essentially intangible and does not result in the ownership of anything; its production may or may not be tied to a physical product. A Web service, as defined by the W3C (World Wide Web Consortium), is a software application identified by a URI (Uniform Resource Identifier), whose interfaces and bindings are capable of being defined, described, and discovered by XML, and which supports direct interactions with other software applications using XML-based messages via Internet-based protocols. Web services are self-contained and modular business applications based on open standards (Leymann et al., 2002). They can share information using standardized communication protocols and ask each other to do something, i.e., ask for service. Service-Oriented Architecture (SOA) utilizes Web services as fundamental elements for developing applications. It is an emerging paradigm for architecting and implementing business collaborations within and across organizational boundaries. SOA enables seamless and flexible integration of Web services or applications over the Internet. It supports universal interoperability and location transparency. SOA reduces the complexity of business applications in large-scale and open environments
by providing flexibility through service-based abstraction of organizational applications.

SOA vs. CBD. Compared with a component, a service is relatively coarse-grained and should be able to encapsulate more details. SOA is an extension of the earlier OOP and CBD concepts. CBD supports a more closed system architecture in which the exact source of the required functionality and communication is predetermined. Proprietary standards and implementation-dependent specification of components have hindered CBD from achieving its primary goal of facilitating reuse. The point of SOA is the service specification rather than the implementation. SOA focuses on the user's view of a computing object or application, i.e., the services that are provided and the metadata that define how the services behave. In SOA, a service has a published network-addressable interface. A published interface is exposed to the network and may not be changed easily, because the clients of the published interface are not known. The difference between a component and a service in terms of the interface is analogous to the difference between an intranet site accessible only by employees of the company and an Internet site accessible by anyone. A service does not define any structural constraints, allowing loose coupling of services over the Internet. CBD architectures, on the other hand, represent a case of tight coupling. For example, in CORBA there is a tight coupling between the client and the server, as both must share the same interface, with a stub on the client side and the corresponding skeleton on the server side. Composing a system from a number of components is relatively controlled compared to dynamic service composition. Moreover, CBD assumes early binding of components, i.e., the caller unit knows exactly which component to contact before runtime. SOA adopts a more flexible approach where the binding is deferred to runtime, enabling the source of provision to change each time. Most significantly, the idea of the service model differs from CBD in that SOA supports the logical separation of service need from service fulfillment. Delivering software as a service brings about new business models and opportunities for software development and service provision in demand-driven, dynamic environments.
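Late binding, the key contrast with CBD noted above, can be illustrated with a toy registry in which the provider is looked up at call time. The registry is a stand-in for a real service directory, and the services themselves are invented.

```python
# An illustrative contrast of early vs. late binding: the caller defers the
# choice of provider to run-time by querying a registry, so the source of
# provision can change on every call.
registry = {}

def publish(service_name: str, endpoint) -> None:
    registry.setdefault(service_name, []).append(endpoint)

def invoke(service_name: str, *args):
    endpoint = registry[service_name][-1]   # bound at call time
    return endpoint(*args)

publish("quote", lambda item: {"item": item, "price": 10})
publish("quote", lambda item: {"item": item, "price": 8})   # a new provider

print(invoke("quote", "widget"))   # late binding picks the newest provider
```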
Agent-Oriented Computing (AOC)

Recently the Agent-Oriented Computing paradigm has gained popularity among researchers who attempt to develop complex systems for business process management (Jennings et al., 2002; Wang et al., 2005b). Terms such as "autonomous agent" and "agency" are now commonly used in the computer science literature. On the other hand, a rich body of literature on the concept of agency and the role of agents already exists in the institutional economics and business fields. We attempt to reconcile the various terms from the two research traditions.

An actor is someone who performs an act, i.e., does something. An actor may be a person, an organizational unit, or a computer program. An actor may be completely autonomous, that is, it acts of its own volition. If the actor is authorized to do something on behalf of someone else, the actor is an "agent" of the other party.

An agent is an actor (performer) who acts on behalf of a principal by performing a service. The agent provides the service when it receives a request for service from the principal. The principal-agent relationship is found in most employer-employee relationships. A classic example of an agency relationship occurs when stockholders hire top executives to run the corporation on their behalf. To manage the relationship between a principal and an agent of the principal, agency theory is concerned with various mechanisms used for aligning the interests of the agent with those of the principal, such as piece rates/commissions and profit sharing (Eisenhardt, 1989).

A broker is a special type of agent that acts on behalf of two symmetrical parties or principals – the buyer and the seller. A broker mediates between the buyer (service-requesting party) and the seller
(service-providing party). Acting as an intermediary between two or more parties in negotiating agreements, brokers use appropriate mediating techniques or processes to improve the dialogue between the parties, aiming to help them reach an agreement. Normally, all parties must view the mediator as neutral or impartial.

Autonomy is the power or right of self-government. It refers to the capacity of a rational individual to make an informed, uncoerced decision. "Autonomous" means that the actor is independent, i.e., the actor can decide what to do and how to do it. An autonomous agent therefore is a system situated in, and part of, an environment, which senses the environment and acts on it, over time, in pursuit of its agenda as derived from its principal. As an agent acts on behalf of the principal, the agent cannot be fully autonomous. The principal may give the agent different levels of choice in performing the task. For example, the principal can tell the agent what to do, but leave it to the agent to decide how to do it.

Software Agent. In computer science, the term "agent" is used to describe a piece of software or code that acts on behalf of a human user or another program in a relationship of agency. It may denote a software-based entity that enjoys some of the properties of autonomy (agents operate without the direct intervention of their human principals), social ability (agents communicate with other agents), reactivity (agents perceive their environment and respond to changes in a timely fashion), and pro-activity (agents do not simply act in response to their environment, but are able to exhibit goal-directed behavior by taking some initiative) (Jennings et al., 2002). The agent-based computing paradigm is devised to help computers know what to do, solve problems on behalf of human beings, and support co-operative working. The behavior of software agents is empowered by humans and implemented by software.

Agent-Oriented Computing (AOC) is based on the idea of delegating the tasks and responsibility of a complex problem to a group of software agents. It emphasizes autonomy and mutual co-operation of agents in performing tasks in open and complex environments. A complex system can be viewed as a network of agents acting concurrently, each finding itself in an environment produced by its interactions with the other agents in the system. AOC is used to model and implement intelligent solutions to semi- or ill-structured problems, which are too complex to be completely characterized and precisely described. AOC offers a natural way to view and describe systems as individual problem-solving agents pursuing high-level goals defined by their principals. It represents an emerging computing paradigm that helps understand and model complex real-world problems and systems by concentrating on high-level abstractions of autonomous entities (Wooldridge et al., 1999).

AOC vs. OOP. From a software engineering point of view, Object-Oriented methodologies provide a solid foundation for Agent-Oriented modeling. AOC can be viewed as a specialization of OOP. OOP proposes viewing a computational system as made up of modules that are able to communicate with one another. AOC specializes this framework by representing the mental states and rich interactions of the modules (agents). While objects emphasize passive behavior (i.e., they are invoked in response to a message), agents support more autonomous behavior, which can be achieved by specifying a number of rules for interpreting environmental states and knowledge for governing the multiple degrees of freedom of activities. In relation to this, mechanisms of knowledge acquisition, modeling, and maintenance have become an important foundation for building autonomous agents.

AOC vs. SOA. A software agent is a software-based entity that enjoys the properties of autonomy, social ability, reactivity, and pro-activity. A Web service is a software application on the Web based on open standards. Though both are computer applications that perform tasks on behalf of principals (human beings or other programs), the focus of software agents is on their autonomous
properties for solving complex problems, while Web services are characterized by their open access standards and protocols over the Internet. While a Web service may only know about itself, agents often have awareness of other agents and their capabilities as interactions occur among the agents. Agents are inherently communicative, whereas Web services are passive until invoked. Agents cooperate autonomously and flexibly, and by forming teams and coalitions can assemble higher-level and more comprehensive services. However, current standards and languages for Web services do not provide for flexible composition functionalities, such as brokering and negotiation in e-marketplaces (Huhns, 2002). Web services are inherently less autonomous and independent than software agents. Against this background, there is a movement towards combining the concept of Web services with software agents. The W3C introduced a concept in which software agents are treated as the foundation for the Web services architecture: "A Web service is an abstract notion that must be implemented by a concrete agent". AOC may take SOA into new dimensions to model autonomous and heterogeneous components in uncertain and dynamic environments. The integration of Web services with software agents can function as a computational mechanism in its own right, thus significantly enhancing the ability to model and construct complex software systems. It will be a promising computing paradigm for efficient enterprise service selection and integration.
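The contrast between a passive service and a proactive agent can be sketched as follows; the replenishment scenario, the names, and the threshold are invented for illustration.

```python
# A schematic contrast between a passive service and a proactive agent:
# the service only acts when invoked, while the agent monitors its
# environment and takes the initiative in pursuit of its goal.
def stock_service(item: str) -> int:        # passive: replies when asked
    return {"widget": 3}.get(item, 0)

class ReplenishmentAgent:
    def __init__(self, item: str, reorder_level: int):
        self.item, self.reorder_level = item, reorder_level

    def step(self) -> None:
        # reactivity: sense the environment via the passive service
        level = stock_service(self.item)
        # pro-activity: act on the goal without being invoked by anyone
        if level < self.reorder_level:
            print(f"agent orders more '{self.item}' (level={level})")

agent = ReplenishmentAgent("widget", reorder_level=5)
agent.step()   # the agent, not a caller, decides to act
```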
Reconciling OOP, CBD, SOA, and AOC

From OOP and CBD to SOA and AOC, the practice of software programming has evolved through different development paradigms. At the conceptual level, these concepts and approaches are complementary and build upon each other; all have a role to play in designing and managing software systems. Each paradigm shift came about in part to deal with greater levels of software complexity. In all cases, the way to manage complexity is by decomposing a complex system or process into smaller modules that can be designed independently, i.e., modularization. Modularization eases the maintenance and updating of complex systems by separating the high-frequency intra-module linkages from the low-frequency inter-module linkages, limiting the scope of interactions between the modules by hiding the intra-module relations inside a module box. Based on the idea of modularity, constructs such as objects, components, software agents, and Web services have been continuously invented and evolved for developing software applications. OOP provides a foundation for software engineering that uses objects and their interactions to design applications and computer programs. CBD provides a coarser-grained construct for larger systems, and separates the interface from the behavior of the construct, supporting public communication between components which know about each other before runtime. SOA goes further by using XML-based and network-addressable interfaces as well as XML-based messages and standard protocols for open communication among software applications across the Internet. Different from OOP, CBD, and SOA, AOC is used to model and implement solutions to semi- or ill-structured problems, which are too complex to be completely characterized and precisely described. Agents are used to perform more autonomous activities in solving complex problems. To achieve this, the knowledge or rules governing the behavior of an agent are separated from the behavior itself. In computer science, the terms object, component, software agent, and Web service all describe a piece of software that performs some action on behalf of human beings, like an agent or actor. In addition to being an actor, an agent can also be a broker, which mediates between the service-requesting party and the service-providing party. Acting as a broker, a software agent can be used to search for appropriate applications to perform requested services. This special type of agent works as an intermediary between service requesters and service providers, coordinating on behalf of the two parties by taking into account service requirements, qualities, costs, constraints, and so on.
APPLYING MODULAR COMPUTING TECHNOLOGIES IN COMPLEX BPM

In the 5th section we propose the DCAR architecture for developing complex process management systems, which includes decomposition of complex processes (D); task coordination (C); awareness of environmental changes (A); and resource selection and coordination (R). This provides the foundation for identifying the BPM requirements for employing appropriate technologies in developing flexible BPM systems. In the 6th section we clarify and explicitly define the similarities and differences between the four modular technologies: OOP, CBD, SOA, and AOC. In this section we show how these technologies can be used to implement the proposed BPM architecture.
Decomposition of Complex Processes (D)

Business processes display complexity as a result of the interactions of their internal components and the interaction of the process with its environment. A process can be decomposed into a set of tasks, tasks into sub-tasks, and so on, through several layers in a network hierarchy. Task or sub-task components can be delegated to software objects, components, agents, and services, as actors of the tasks, which interact and communicate in performing the process. To deal with interactions across different organizations, SOA proposes "service" as a high-level view of the building block of a process. A process is composed of a set of services, each of which is provided by an individual organization. By using SOA, the inter-service interactions are separated from intra-service interactions; the complexity of both is maintained at different layers. Moreover, we can take advantage of the reusability, inter-operability, and extensibility of Web services on the basis of open standards to cater for business process integration and interoperation over the Web. The highly dynamic and unpredictable nature of business processes makes an agent-based approach appealing. AOC assigns the main activities of business applications to autonomous agents. Such agents are flexible problem solvers that have specific goals to achieve and interact with one another to manage their autonomy and interdependency in business processes. AOC is well suited for complex process situations that are not all known a priori, cannot be assumed to be fully controllable in their behaviors, and must interact on a sophisticated level of communication and coordination (Wang et al., 2005b).
Flexible Task Coordination (C)

A business process is made up of a number of task components that interact with dynamic behavior. OOP uses objects to hide behavior and data, supporting communications among small objects, e.g., functions of tasks. CBD extends OOP by supporting interaction among components (e.g., tasks), i.e., coarser-grained constructs, using public communication interfaces. SOA goes further by using XML-based and network-addressable interfaces as well as XML-based messages and standard protocols for open communication among BPM applications over the Internet. While OOP, CBD, and SOA mainly support structured communications among tasks or task components, AOC is able to support ill-structured interactions. To coordinate the interactions in dynamic situations, flatter hierarchies, decentralized autonomous units, and decision-based coordination mechanisms are required, where AOC is directly applicable. AOC supports decentralized control and asynchronous operations by a group of autonomous software entities, which are able to perform decision-based coordination of their activities. In AOC, after decomposing a complex process into a number of loosely coupled tasks in a flat hierarchy, we may delegate the tasks to a number of autonomous agents, each working both autonomously and collaboratively throughout the whole process. In complex process management, it is impossible to predefine all activities and interactions at design time. Instead, we define the goal or role of each agent, and specify a set of rules for governing the behavior of the agent. Agents operate asynchronously and in parallel. This results in an increase in overall speed and robustness in BPM. The failure of one agent does not necessarily make the overall system useless, as other agents may adjust and coordinate their behavior reactively and proactively in response to the change.

Awareness of Dynamic Environments (A)
The complexity of business processes comes not only from the interactions of their internal components, but also from the interaction of the process with its environment. To manage business processes in a dynamic environment, we need to be able to continuously perceive the environment and make real-time decisions on the process. Objects, components, and services are normally unable to adapt their behavior in dynamic environments. An agent-based software entity is able to sense and recognize the situation and determine appropriate actions in response. Unlike the ECA (event-condition-action) rules in workflow systems that enable reaction to certain events, AOC goes further by incorporating all environmental information into a mental state that watches over the whole environment. Individual events are put together for a comprehensive understanding; ambiguous information is understood after appropriate interpretation and reasoning (Wang et al., 2006a). As shown in Figure 3, information about the environment (e.g., events, activities, and resources) is sensed and interpreted by the agent on the basis of predefined schemes and rules. In case information is unanticipated or comes as a complete surprise, it will be sent to a human manager for manual processing. Moreover, AOC supports prediction of the future state of the environment for the purpose of proactive action. Different from passive response to current events, proactive behavior has an orientation to the future, anticipating problems and taking affirmative steps to deal with them rather than reacting after a situation has already occurred. It refers to the exhibition of goal-oriented behaviors by taking initiative.

Figure 3. Agent-based Process Management in Dynamic Environment
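A minimal sketch of this sense-interpret-act cycle, with unanticipated information escalated to a human manager, might look as follows; the events and the rules are hypothetical.

```python
# A minimal sketch of the cycle in Figure 3: environmental information is
# interpreted against predefined rules, and anything the rules cannot
# interpret is escalated to a human manager for manual processing.
interpretation_rules = {
    "order_cancelled": "compensate and stop shipment",
    "supplier_delay":  "re-schedule dependent tasks",
}

def perceive_and_act(event: str) -> str:
    action = interpretation_rules.get(event)
    if action is None:
        return f"escalate '{event}' to human manager"   # unanticipated
    return action

for event in ["supplier_delay", "warehouse_fire"]:
    print(perceive_and_act(event))
```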
Flexible Resource Coordination (R)

As discussed, the rise of Internet-mediated e-Business brings the era of demand-driven and resource-based soft connections between business organizations. A business process can be dynamically established by connecting or composing services provided by different organizations. SOA provides a real platform for resource selection and allocation in order to implement seamless and flexible integration of business processes over the Internet. However, it is a complex problem to search for appropriate services from a large number of resources as well as to schedule and coordinate them under various constraints. The complexity arises from the unpredictability of solutions from service providers (e.g., availability, capacity, and price), the constraints on the services (e.g., time and cost constraints), and the interdependencies among the services. An individual service involved in an integrated process may not have a view of the whole process, very often resulting in incoherent and contradictory hypotheses and actions (Wang et al., 2006b). To deal with this problem, AOC can be used for decentralized decision making and coordination. In process integration, decision-making and coordination among services can be modeled as a distributed constraint satisfaction problem, in which solutions and constraints are distributed over a set of services and solved by a group of agents (brokers) acting on behalf of service requesters and providers. In this context, service-based process integration is mapped to an agent-mediated distributed constraint satisfaction or optimization problem. Individual services are mapped to variables, and solutions of individual services are mapped to values. A distributed constraint satisfaction or optimization problem consists of a set of variables, each assigned to an agent, where the values of the variables are taken from finite and discrete domains. Finding a global solution to an integrated process requires that all agents find solutions that satisfy not only their own constraints but also inter-agent constraints (Wang et al., 2008b).
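A toy version of this mapping is sketched below: each service is a variable, candidate provider solutions are its values, and integration succeeds only when intra- and inter-service constraints are jointly satisfied. In a real deployment the search would be distributed across broker agents; here it is centralized for brevity, and all domains and constraints are invented.

```python
# A toy instance of the mapping described above: services are variables,
# candidate provider solutions are their values, and process integration
# succeeds only when all constraints (including inter-agent ones) hold.
from itertools import product

domains = {
    "manufacturing":  [{"done_by_day": 5, "cost": 400},
                       {"done_by_day": 3, "cost": 550}],
    "transportation": [{"pickup_day": 4, "cost": 90},
                       {"pickup_day": 6, "cost": 70}],
}

def consistent(m, t) -> bool:
    # inter-agent constraint: goods must exist before pickup;
    # global constraint: total cost within budget
    return t["pickup_day"] >= m["done_by_day"] and m["cost"] + t["cost"] <= 650

solutions = [(m, t) for m, t in product(domains["manufacturing"],
                                        domains["transportation"])
             if consistent(m, t)]
print(solutions[0] if solutions else "no coherent integration")
```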
CONCLUSION

In this paper, we have investigated the challenges and solutions for complex BPM. Based on the analysis of business process complexity and the modularity theory for complex systems, we presented the DCAR architecture for developing BPM systems in turbulent environments. In addition, we investigated relevant modular computing technologies that can be used to model and develop complex systems. We analyzed the overlapping technical concepts and techniques, and clarified the differences and relationships between these terminologies and techniques in the context of BPM. Based on the examination of 1) the requirements of complex BPM according to the DCAR architecture and 2) the supporting technologies for complex systems, we have presented a systematic picture of how these technologies can be applied and integrated in developing systems for complex process management. This work will benefit professionals, researchers, and practitioners through its analysis and theoretical investigation of problems and solutions in developing systems for complex BPM.
ACKNOWLEDGEMENT

This research is supported by a General Research Fund (No. RGC/HKU7169/07E) from the Hong Kong SAR Government, and a Seed Funding for Basic Research (200611159216) from The University of Hong Kong.
REFERENCES

Baldwin, C. Y., & Clark, K. B. (1997). Managing in an age of modularity. Harvard Business Review, 75(5), 84–93.

Casati, F., Ceri, S., Paraboschi, S., & Pozzi, G. (1999). Specification and implementation of exceptions in workflow management systems. ACM Transactions on Database Systems, 24(3), 405–451. doi:10.1145/328939.328996

Casati, F., & Shan, M. (2001). Dynamic and adaptive composition of e-services. Information Systems, 26(3), 143–163. doi:10.1016/S0306-4379(01)00014-X

Chiu, D. K. W., Li, Q., & Karlapalem, K. (1999). A meta modeling approach for workflow management system supporting exception handling. Information Systems, 24(2), 159–184. doi:10.1016/S0306-4379(99)00010-1

Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14(1), 57–74. doi:10.2307/258191

El Sawy, O. A. (2001). Redesigning enterprise processes for e-business. Boston: Irwin/McGraw-Hill.

Elfatatry, A. (2007). Dealing with changes: Components versus services. Communications of the ACM, 50(8), 35–39. doi:10.1145/1278201.1278203

Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64. doi:10.1518/001872095779049543

Garud, R., & Kumaraswamy, A. (2003). Technological and organizational design for realizing economies of substitution. In R. Garud, A. Kumaraswamy, & R. N. Langlois (Eds.), Managing in the modular age: Architectures, networks, and organizations (pp. 45-77). Blackwell Publishing Limited.
Hammer, M., & Champy, J. (1993). Reengineering the corporation: A manifesto for business revolution. London: Brealey.

Holland, J. (1995). Hidden order: How adaptation builds complexity. Cambridge, MA: Perseus.

Huhns, M. N. (2002). Agents as Web services. IEEE Internet Computing, 6(4), 93–95. doi:10.1109/MIC.2002.1020332

Jennings, N. R., Faratin, P., Norman, T. J., O'Brien, P., & Odgers, B. (2002). Autonomous agents for business process management. International Journal of Applied Artificial Intelligence, 14(2), 145–189.

Kammer, P. J., Bolcer, G. A., Taylor, R. N., Hitomi, A. S., & Bergman, M. (2000). Techniques for supporting dynamic and adaptive workflow. Computer Supported Cooperative Work, 9(3-4), 269–292. doi:10.1023/A:1008747109146

Kumar, A., & Zhao, J. L. (2002). Workflow support for electronic commerce applications. Decision Support Systems, 32(3), 265–278. doi:10.1016/S0167-9236(01)00114-2

Kumar, K. (2001). Technology for supporting supply chain management: Introduction. Communications of the ACM, 44(6), 58–61. doi:10.1145/376134.376165

Kumar, K., & Narasipuram, M. M. (2006). Defining requirements for business process flexibility. In Seventh Workshop on Business Process Modeling, Development, and Support. CAiSE.

Kumar, K., & van Dissel, H. (1996). Sustainable collaboration: Managing conflict and cooperation in interorganizational systems. MIS Quarterly, 20(3), 279–300. doi:10.2307/249657
Kumar, K., van Fenema, P. C., & von Glinow, M. A. (2007). Offshoring and the global distribution of work: Implications for task interdependence theory and practice. In First Annual Research Conference and Workshop on Offshoring. North Carolina.

Leymann, F., Roller, D., & Schmidt, M. T. (2002). Web services and business process management. IBM Systems Journal, 41(2), 198–211.

Melão, N., & Pidd, M. (2000). A conceptual framework for understanding business processes and business process modeling. Information Systems Journal, 10(2), 105–129. doi:10.1046/j.1365-2575.2000.00075.x

Mintzberg, H. (1979). The structuring of organizations. Englewood Cliffs, NJ: Prentice Hall.

Mowshowitz, A. (1997). Virtual organization. Communications of the ACM, 40(9), 30–37. doi:10.1145/260750.260759

Petrie, C. J., & Bussler, C. (2003). Service agents and virtual enterprises: A survey. IEEE Internet Computing, 7(4), 68–78. doi:10.1109/MIC.2003.1215662

Rumbaugh, J. (1991). Object-oriented modeling and design. Englewood Cliffs, NJ: Prentice Hall.

Sanchez, R., & Mahoney, J. T. (2003). Modularity, flexibility, and knowledge management in product and organization design. In R. Garud, A. Kumaraswamy, & R. N. Langlois (Eds.), Managing in the modular age: Architectures, networks, and organizations (pp. 362-389). Blackwell Publishing Limited.

Schilling, M. A. (2003). Towards general modular systems theory and its application to interfirm product modularity. In R. Garud, A. Kumaraswamy, & R. N. Langlois (Eds.), Managing in the modular age: Architectures, networks, and organizations (pp. 172-216). Blackwell Publishing Limited.
Simon, H. A. (1977). The new science of management decision. Englewood Cliffs, NJ: Prentice-Hall.

Simon, H. A. (1981). The sciences of the artificial. Cambridge, MA: MIT Press.

Simon, H. A. (2003). The architecture of complexity. In R. Garud, A. Kumaraswamy, & R. N. Langlois (Eds.), Managing in the modular age: Architectures, networks, and organizations (pp. 15-44). Blackwell Publishing Limited.

Szyperski, C. (2002). Component software: Beyond object-oriented programming. Boston: Addison-Wesley Professional.

van der Aalst, W. M. P., & van Hee, K. M. (2002). Workflow management: Models, methods, and systems. Cambridge, MA: MIT Press.

Volberda, H. W. (1999). Building the flexible firm: How to remain competitive. Oxford University Press.

Wang, M., Cheung, W. K., Liu, J., Xie, X., & Lou, Z. (2006b). E-service/process composition through multi-agent constraint management. In International Conference on Business Process Management (BPM 2006) (LNCS 4102, pp. 274-289).

Wang, M., & Kumar, K. (2008a). Developing flexible business process management systems using modular computing technologies. In Proceedings of the Eighth Global Conference on Flexible Systems Management (GLOGIFT-08). Hoboken, NJ.

Wang, M., Liu, J., Wang, H., Cheung, W., & Xie, X. (2008b). On-demand e-supply chain integration: A multi-agent constraint-based approach. Expert Systems with Applications, 34(4), 2683–2692. doi:10.1016/j.eswa.2007.05.041
Wang, M., & Wang, H. (2002). Intelligent agents supported flexible workflow monitoring system. In Proceedings of the 14th International Conference on Advanced Information Systems Engineering (CAiSE'02) (LNCS 2348, pp. 787-791).

Wang, M., & Wang, H. (2005b). Intelligent agent supported business process management. In Proceedings of the 38th Hawaii International Conference on System Sciences (HICSS-38). IEEE Computer Society Press.

Wang, M., & Wang, H. (2006a). From process logic to business logic - A cognitive approach to business process management. Information & Management, 43(2), 179–193. doi:10.1016/j.im.2005.06.001

Wang, M., Wang, H., & Xu, D. (2005a). The design of intelligent workflow monitoring with agent technology. Knowledge-Based Systems, 18(6), 257–266. doi:10.1016/j.knosys.2004.04.012

Weske, M. (1998). Object-oriented design of a flexible workflow management system. In 2nd East-European Symposium on Advances in Databases and Information Systems (LNCS 1475, pp. 119-130).

Weske, M. (2001). Formal foundation and conceptual design of dynamic adaptations in a workflow management system. In Proc. HICSS-34. Maui, Hawaii.

Wolters, N. J. (2002). The business of modularity and the modularity of business. PhD thesis, Erasmus Research Institute of Management, Rotterdam.

Wooldridge, M., & Jennings, N. R. (1999). Software engineering with agents: Pitfalls and pratfalls. IEEE Internet Computing, 3(3), 20–27. doi:10.1109/4236.769419
KEY TERMS AND DEFINITIONS

Agent-Oriented Computing: Agent-Oriented Computing (AOC) is based on the idea of delegating the tasks and responsibility of a complex problem to software agents. It emphasizes autonomy and mutual co-operation of agents in performing tasks in open and complex environments.

Business Process: A business process can be simply defined as a collection of activities that create value by transforming inputs into more valuable outputs. These activities consist of a series of steps performed by actors to produce a product or service for the customer.

Business Process Management: Business Process Management (BPM) refers to activities performed by organizations to design (capture processes and document their design in terms of process maps), model (define business processes in a computer language), execute (develop software that enables the process), monitor (track individual processes for performance measurement), and optimize (retrieve process performance for improvement) operational business processes by using a combination of models, methods, techniques, and tools.
Component-Based Development: Component-Based Development (CBD) is a software engineering discipline with an emphasis on decomposing engineered systems into functional or logical components with well-defined interfaces used for communication across the components.

Modularity: Modularity refers to a particular design structure in which a complex product or process is built from smaller subsystems that can be designed independently.

Object-Oriented Programming: Object-Oriented Programming (OOP) is a software engineering paradigm that uses "objects" and their interactions to design applications and computer programs.

Situation Awareness: Situation awareness is the perception and understanding of objects, events, people, system states, interactions, environmental conditions, and other situation-specific factors in complex and dynamic environments.

Service-Oriented Architecture: Service-Oriented Architecture (SOA) utilizes Web services as fundamental elements for developing applications. It is an emerging paradigm for architecting and implementing business collaborations within and across organizational boundaries.
This work was previously published in Handbook of Research on Complex Dynamic Process Management: Techniques for Adaptability in Turbulent Environments, edited by Minhong Wang and Zhaohao Sun, pp. 1-22, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 4.13
Multiple-Step Backtracking of Exception Handling in Autonomous Business Process Management

Mingzhong Wang, University of Melbourne, Australia
Jinjun Chen, Swinburne University of Technology, Australia
Kotagiri Ramamohanarao, University of Melbourne, Australia
Amy Unruh, University of Melbourne, Australia
ABSTRACT

This chapter proposes a multiple-step backtracking mechanism to maintain a tradeoff between replanning and rigid backtracking for exception handling and recovery, thus enabling business process management (BPM) systems to operate robustly even in complex and dynamic environments. The concept of the BDI (belief, desire and intention) agent is applied to model and construct the BPM system, inheriting its advantages of adaptability and flexibility. Then, the flexible backtracking approach is introduced by utilizing the beneficial features of event-driven and means-end reasoning of BDI agents. Finally, we incorporate the open nested transaction model to encapsulate plan execution and backtracking to gain system-level support for concurrency control and automatic recovery. With the ability to reason about task characteristics, our approach enables the system to find and commence a suitable plan prior to, or in parallel with, a compensation process when a failure occurs. This kind of computing allows us to achieve business goals efficiently in the presence of exceptions and failures.

DOI: 10.4018/978-1-60566-669-3.ch010
INTRODUCTION

A critical challenge in building practical business process management (BPM) systems is to allow users to maintain system robustness and reliability with respect to correct execution even in the presence of abnormalities. As the operating environment becomes increasingly complex, dynamic and error-prone, it is extremely challenging for designers and programmers to identify all possible combinations of exceptions and to design corresponding handling methods. Therefore, a flexible, systematic and autonomic approach to exception handling is essential for the success of applying complex BPM systems to wider fields of application.

Multi-agent systems have been extensively studied as a powerful high-level decomposition and abstraction tool in analyzing, designing, and implementing complex software systems (Jennings, 2001). Many researchers and practitioners have noticed the fundamental relationship between agents and workflow systems (Ehrler, Fleurke, Purvis, & Savarimuthu, 2006) and proposed various approaches to build dynamic and adaptive workflow systems with the concept of agents, and vice versa. To manage the complexity arising from dynamic and complex running environments, we propose to apply the reactive BDI (Belief, Desire and Intention) (Rao & Georgeff, 1995) agent system to model and construct BPM systems, thus benefiting from its sound features of event-driven and means-end reasoning for the purpose of robustness, flexibility and adaptability. Providing a higher level of abstraction, agents help to simplify the modeling and construction of flexible BPM systems operating in open and distributed environments.

To deal with exceptions, most multi-agent systems apply a backtracking mechanism to recover and retry the failed execution. However, they backtrack in a rigid step-by-step manner until an alternative execution path is found. Moreover, they follow a defensive recovery-then-try pattern. Due to the dynamic and nondeterministic features of complex environments, rigid recovery in the reverse chronological order of the execution history will in many instances become meaningless and inappropriate. Some approaches consider starting a fresh plan after the exception. However, this approach is too computationally expensive and rarely practical to deploy because it requires complete knowledge about the running environment and needs to consider all previous actions. As a result, developers are forced to consider low-level details of disturbances, failures, or uncontrolled interactions between workflow actors to meet the requirement of robustness and reliability.

To address this issue, we propose to extend the existing rigid backtracking strategy to support execution backtracking in a multiple-step fashion (reverse chronological order with certain steps skipped). Instead of step-by-step backtracking through the execution tree, the system can "jump" back to an arbitrary history node and try another eligible execution path. Our approach maintains a tradeoff between replanning and traditional backtracking strategies. To provide automatic and system-level support for concurrency control and exception handling, the open nested transaction model (Weikum & Schek, 1992) is integrated into our multiple-step backtracking model. Compared with other approaches which apply transaction models (Gray & Reuter, 1993) to workflow (Georgakopoulos, Hornick, & Manola, 1996) or multi-agent systems (Ramamohanarao, Bailey, & Busetta, 2001; Nagi, 2001), our method unites the beneficial features of event-driven and means-end reasoning from BDI agent systems and utilizes a flexible backtracking approach that allows the execution to "jump" back several levels at once and continue towards the goal in case backtracking one level in the execution tree does not solve the problem.
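The following schematic (our own illustration, not the chapter's formal model) conveys the core idea: on failure, the system scans the execution history in reverse for the nearest node that still offers an untried alternative, compensating all skipped steps in one jump rather than one level at a time.

```python
# A schematic of multiple-step backtracking: rather than undoing one step
# at a time, jump back to the nearest ancestor in the execution history
# with an untried alternative, compensating the skipped steps in one go.
history = [
    {"step": "choose supplier",   "alternatives": ["supplier B"]},
    {"step": "reserve stock",     "alternatives": []},
    {"step": "schedule shipping", "alternatives": []},   # <- failure here
]

def multi_step_backtrack(history):
    # scan the history in reverse for a node with an untried alternative
    for i in range(len(history) - 1, -1, -1):
        if history[i]["alternatives"]:
            skipped = [n["step"] for n in history[i + 1:]]
            retry = history[i]["alternatives"].pop(0)
            return skipped, history[i]["step"], retry
    return None   # no alternative anywhere: the goal is unachievable

skipped, at, retry = multi_step_backtrack(history)
print(f"compensate {skipped}, resume '{at}' with '{retry}'")
```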
BACKGROUND

BDI Agents and Exception Handling

Software agents have the ability to provide autonomous and reactive behaviors, and to support decomposition and abstraction of functionality, making agent technology useful in analyzing, designing, and implementing complex software systems (Jennings, 2001). They are persistent entities that can perceive, reason and act in some environment. Often, agents are autonomous, reactive, and sociable (Woolridge, 2001). In order to push agent technology into the mainstream of software development, various agent architectures (Woolridge, 2001) and agent programming languages have been proposed. Among them, BDI (Rao & Georgeff, 1995) is probably the most mature and accepted model. Most BDI platforms with PRS (Procedural Reasoning System) style share the following three features (a small code sketch of the plan form appears after the list).

• An agent contains four key data structures, as shown in Figure 1. Beliefs are the informational state representing what an agent knows about itself and the world, which may be incomplete or even incorrect. Goals are the motivational state and correspond to what the agent wants to achieve. Plans represent the procedural knowledge about how to achieve a certain goal or react to a specific situation. Intentions are selected plans for execution and represent the deliberative state of an agent.

• The execution of an agent is event driven. Plans, which are usually represented in the form of event ← preconditions | action_sequence, are defined to react to a certain event, which can be an internal modification of the agent's goals and beliefs or an external change of the environment. After the event is triggered, the preconditions are tested before action_sequence can be chosen for execution. Because events can occur non-deterministically, plans are executed reactively.

• The execution path to achieve a goal of an agent is generated by means-end reasoning. That is, the goal is treated as an initial event triggering a corresponding plan to run. The action_sequence of the plan may contain primitive actions as well as sub-goals. All sub-goals will in turn be treated as events to trigger sub-plans to run. This process continues recursively until all actions in sub-plans are primitive or atomic.
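A minimal Python sketch of this plan form and its event-driven selection is given below; the beliefs, triggers, and actions are invented, and the tuple (event, precondition, action_sequence) is a simplification of real PRS-style plan languages.

```python
# A minimal sketch of the PRS-style plan form: each plan is a tuple
# (event, precondition, action_sequence); an event triggers the first plan
# whose precondition holds against the agent's beliefs.
beliefs = {"at_work": True, "has_badge": False}

plans = [
    ("enter_building", lambda b: b["has_badge"],
     ["swipe badge", "open door"]),
    ("enter_building", lambda b: not b["has_badge"],
     ["call reception", "open door"]),
]

def handle(event: str) -> None:
    for trigger, precondition, actions in plans:
        if trigger == event and precondition(beliefs):
            for action in actions:       # sub-goals would recurse here
                print("executing:", action)
            return
    print("no applicable plan for", event)

handle("enter_building")
```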
The execution process of a BDI agent can be abstractly depicted in Figure 2. This process is described by (Rao & Georgeff, 1995) and applied directly or indirectly by dMARS (d’Inverno et al.,
Figure 1. PRS-style structure for BDI agent. (Adapted from (d’Inverno, Luck, Georgeff, Kinny, & Wooldridge, 2004))
1083
Multiple-Step Backtracking of Exception Handling in Autonomous Business Process Management
Figure 2. Event driven and means-end reasoning for BDI system execution
2004) and 3APL (Dastani, Riemsdijk, Dignum, & Meyer, 2003). The plan matcher will retrieve an event from the queue and search through the plan library to find the set of plans which can handle this event in that certain situation (determined by its beliefs). There might be more than one suitable plan found, and the plan selector will choose one of them and append it into the intention stack. Finally, the intention is executed, which results in internally updating the BDI state, including beliefs, desires and intentions, or externally operating to sense and change the environment. Since agents usually work in open, dynamic and error-prone environment, they are more liable to conflicts and eventually failures. Without dealing with these problems appropriately, agent system can only remain as an experimental toy. Therefore, substantial effort is carried out with attentions to automate exception handling and execution resumption, thus making agent programming a serious platform to developers for developing complex applications. On one extreme, replanning is proposed to have a fresh start by throwing away all the work that has been previously done in planning. However, it is too computationally expensive and rarely practical because it requires complete knowledge about the environment and business domain and involves a lot of programming efforts. On the
other extreme, backtracking is applied to go back through all the work previously done in planning, in a depth-first search manner. Although the feasibility of backtracking requires a rigid execution tree structure, it has become the most widely applied approach for exception handling in multi-agent systems, because it fits nicely into the hierarchical program architecture and enables systematic, platform-level recovery support. Backtracking one step in the execution tree requires some compensation. Many existing agent programming languages provide basic support for modeling exceptions and their compensations. They usually use a specified event to trigger a new plan that defines compensating actions. For example, 3APL (Dastani et al., 2003) allows using plan revision rules to define compensating actions for a specified event. dMARS (d'Inverno et al., 2004) allows encoding of maintenance conditions, as well as success and failure actions, into a task definition. However, all of them lack a systematic way to organize and manage those exceptions and compensations, making complex system design and interaction management difficult and sometimes impossible.
Some researchers propose to apply transaction concepts to help build a systematic and automatic platform for robust agent execution. TOMAS (Ramamohanarao, Bailey, & Busetta, 2001) applies a nested transaction model as the concurrency control and recovery mechanism to avoid performing conflicting update operations on the agent. However, since a nested transaction model requires full control of the resources and no exposure of partial results, it is not feasible in most situated agent systems. Even if all agents perform database manipulation only, the model is still too rigid to be practical in a multi-agent system, because long-running activities can lead to locking resources for very long periods and can cause deadlocks. Nagi (2001) takes a more workflow-like approach by treating every action as an ACID (Atomicity, Consistency, Isolation, and Durability) entity and putting such actions into an open nested transaction model to create transaction trees. ECA (Event, Condition, Action) rules are used to link two agents if one uses the partial results of another. But this approach does not allow changes of execution plans, which are quite common in agent applications. What is more, arbitrary and unstructured linkages among different agents breach the design principles of modularization and loose coupling. As an extension to the compensation concept in advanced transaction models, Unruh, Bailey, and Ramamohanarao (2004) discuss the use of goal-based semantic compensation in the context of agents. Compared with transaction-based solutions, Guardian (Tripathi & Miller, 2001) and Citizens (Klein, Rodriguez-Aguilar, & Dellarocas, 2003) separate the mechanisms and knowledge about exception handling from the agent system into a centralized exception manager. However, this separation only works well for some domain-independent exceptions. SaGE (Souchon, Dony, Urtado, & Vauttier, 2004) proposes to organize agents, as well as their exception handlers, in a hierarchical structure, but it lacks a precise model of what to do after the exception has been handled, based on the exception-handling plan and the result of executing that plan.
Exception Handling in Workflow Systems

A primitive form of multi-agent system can be viewed as a workflow model in the sense that it has a predefined, finite (static) execution flow. The two share the same problems with regard to exception handling. It is therefore essential for the success of a workflow management system that it provide a powerful yet easy-to-use mechanism for maintaining system robustness and reliability, with respect to the correct execution of tasks even in the presence of abnormalities and exceptions. Georgakopoulos, Hornick, and Manola (1996) argue that Customized Transaction Management (CTM) is one of the key infrastructure technologies for an effective workflow management system. CTM can ensure the correctness and reliability of applications implementing business or information processes, while permitting the functionality each particular process requires (e.g., isolation, coordination, or collaboration between tasks).
The concept of a transaction is a concise but powerful abstraction tool with the properties of concurrency control and failure atomicity. Traditional transactions have ACID properties (Gray & Reuter, 1993), which prevent inconsistency and integrity problems. Atomicity ensures that either all or none of the operations of a transaction are performed. Consistency ensures that a transaction maintains its integrity constraints on the objects it manipulates. Isolation ensures that a transaction executes as if it were running alone in the system and that intermediate transaction results are hidden from other concurrently executing transactions. Durability ensures that changes made by successfully completed transactions persist even when systems crash. In database applications, each transaction is required to have the ACID properties. Therefore, whenever an exception occurs, all work already done in the transaction is aborted and the system is restored to the starting state as if nothing
has happened. However, ACID transactions require an activity to obtain full control of, and exclusive access to, its resources. In contrast to database applications, where transactions can lock records to get exclusive access and can restore any data from history, workflow tasks by nature usually work in an open environment and operate on physical objects, where actions "always commit" and it is impractical or even impossible to satisfy either requirement. For example, a flight reservation task cannot lock the schedule to avoid flight changes, nor restore a bank account to its original amount by itself when cancelling a booking. Thus, traditional transaction mechanisms need to be extended for an open and shared environment before they can be applied to workflow systems.
The extended transaction models in workflow systems usually focus on the inter-dependency among the participating tasks. The overall workflow process is treated as a big transaction which organizes all participating tasks in a tree structure to ensure the criteria of correctness and reliability. Each task node is considered a sub-transaction, which may not be required to hold all ACID properties. The relationships among these sub-transactions are depicted by task dependencies, which can ensure the execution order and execution correctness. Based on the dependencies among tasks, the workflow management system can generate plans to execute. Theoretically, when exceptions occur, the system can retrieve all affected tasks and coordinate them to handle the exceptions together. Generally, the failure of a task at a given level may or may not affect its parent task. If it does, then the parent is to be rolled back, and the procedure is repeated until a parent task is reached that is not affected. From this point on, a parent task may try an alternative child step to compensate or restart. The overall process is then a two-phase remedy, where the first phase, called the bottom-up phase, determines the highest ancestor task affected by the failure of the current task, and the second phase, called the top-down phase, undoes the changes at
each level starting from that ancestor (Kamath & Ramamritham, 1996). Another proposal is called the sphere of joint compensation (Kamath & Ramamritham, 1996). A collection of tasks in a workflow is grouped into a sphere S such that either all the tasks of S complete successfully or all of them are compensated. Thus, a sphere is basically a failure-atomic unit. Spheres can overlap and be nested. In effect, spheres are relaxed transactions with the properties of atomicity and consistency. The problem is how to define the scope of a sphere. Although these methods can guarantee the correctness of system execution, they do not consider issues related to context changes arising from a dynamic and complex environment. For example, when a branch of the execution tree fails, other possible solutions must wait until the recovery completes. If the running environment for recovery has become different from the initial execution or design assumptions, the overall system is taken down even though other approaches to continue the execution towards the business goals sometimes clearly exist. As such, a more flexible recovery mechanism is required to organize transactional recovery and forward execution in complex interacting systems.
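As an illustration of the two-phase remedy described above, the following Python sketch (illustrative only; the task-tree structure and the affected() test are our assumptions, not from the cited work) climbs bottom-up to the highest affected ancestor and then undoes work top-down from that ancestor.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Task:
        name: str
        parent: Optional["Task"] = None
        children: List["Task"] = field(default_factory=list)

        def add(self, child: "Task") -> "Task":
            child.parent = self
            self.children.append(child)
            return child

    def affected(task: Task) -> bool:
        # Domain-specific test: does the child's failure invalidate this task?
        # Here we assume a task is affected unless it is marked as a safe point.
        return not task.name.startswith("safe:")

    def two_phase_remedy(failed: Task) -> None:
        # Bottom-up phase: find the highest ancestor affected by the failure.
        top = failed
        while top.parent is not None and affected(top.parent):
            top = top.parent
        # Top-down phase: undo changes from that ancestor downward,
        # compensating children before their parent.
        def undo(t: Task) -> None:
            for c in t.children:
                undo(c)
            print(f"compensating {t.name}")
        undo(top)

    # Usage: a failure in 'rent_yacht' rolls back the 'entertainment' subtree,
    # but stops below the safe root.
    root = Task("safe:travel")
    ent = root.add(Task("entertainment"))
    yacht = ent.add(Task("rent_yacht"))
    two_phase_remedy(yacht)   # compensates rent_yacht, then entertainment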
A Motivating Example

Figure 3. Travel management system

A travel management system with the goal of preparing a holiday for its customers is illustrated in Figure 3. All alternatives at the choice points are eligible for selection. Let us assume the system chooses California as the tour destination from the three possible places. It then tries to book the flight and the hotel. After that, it chooses Sailing as the entertainment plan. Finally, it begins to rent a yacht. Unselected branches during the execution are cut off to simplify the presentation. If yacht renting fails, the traditional backtracking mechanism will go back to the choice point of Entertainment and try Disneyland. However, if tickets for Disneyland are sold out, the system needs to keep on backtracking, undoing the hotel booking and the flight booking, back to the choice point of Destination, and try either Hawaii or Hong Kong as the new destination.
However, this rigid step-by-step backtracking approach is sometimes not appropriate when the environment keeps changing. First, its requirement of keeping the execution structure static for recovery purposes is hard to fulfill in a dynamic and non-deterministic environment, and recovery in the reverse order of the execution history will probably become meaningless in many situations. For example, the closure of the hotel after the booking has been made will invalidate any rollback or compensation attempts and result in the discontinuity of achieving the goal of holiday travel. Second, after exceptions occur, it requires complete compensation of failed plans before trying any further efforts to achieve the goal. Because applying other plans to approach the goal may not always conflict with the process of compensation, this requirement results in inefficiency with respect to goal achievement and sometimes brings unnecessary costs.
To address these drawbacks, we propose a multiple-step backtracking mechanism for recovering from exceptions that occur in BPM systems running in dynamic and open environments. In our approach, when the system knows that going back one level in the execution tree cannot deal with the exception, it can instead jump back several levels at once to continue its execution towards the goal. We will also show how the open nested transaction model is integrated into our flexible backtracking approach to provide systematic and automatic support for exception handling and concurrency control.
BDI-STYLE EXECUTION MODEL FOR BPM

In order to operate in complex and dynamic environments, the execution tree of a business process must be generated at run time, reactively to its surroundings. We propose to apply the BDI framework to construct the dynamic composition of individual autonomous participants in the system to achieve a certain business goal.
The application domain has a predefined set of business rules. A goal in a specified environment is broken down into sub-goals by matching it with corresponding plans. Each subgoal can be achieved by delegating it to some agents, or by decomposing it further with appropriate plans. Compared with conventional workflow systems, the proposed BPM execution model is more dynamic and adaptive to the environment because the execution tree is constructed at run time according to the real situation. For the purposes of the approach we describe in this chapter, we can assume without loss of generality that a plan is a task to achieve. Thus, we will use task and plan interchangeably. During the plan-matching process for a goal, there may exist more than one feasible plan. The system is then said to have a choice point, which is a key concept in our model.
Definition 1. A choice point of the execution tree records the applicable plans found by the plan matcher to deal with a certain goal. It is denoted as cp_goal = {p_1, p_2, ..., p_n}, where p_i is an applicable plan. S_cp(i) stands for the selection of plan p_i by the plan selector.
Because the BPM system is modeled and built by following BDI agent systems, its execution shares the same features as multi-agent systems.
Principle 1. After a plan has made a change to the environment, if the whole system is still in a semantically consistent state satisfying system constraints, this state is acceptable even though the actions are later shown to be futile in working towards a goal.
This rule allows the execution of the BPM system to be treated in terms of independent small plan fragments. Each such fragment can be encapsulated in a transaction which can externalize its result after termination without causing long-running transactions. For example, even if the travel management system cannot arrange any entertainment in California, the existence of the flight and hotel bookings does not prevent it from trying other options. Moreover, it is more important to
achieve the goal of holiday arrangements for the system than to undo its payment of the booking deposit.
Principle 2. Cleaning up a side effect of a failed selection S_cp(i) can have lower priority than trying its alternatives or achieving the system's goal.
Compensation is usually used to release consumed resources which are necessary for other attempts. However, it is the means, not the purpose. All unrelated compensations can be processed in parallel with, or even after, the achievement of the goal, to improve system throughput and efficiency. The applicability of this principle is based on the fact that system plans are dynamically composed when their events are triggered and their preconditions are satisfied. However, in resource-bounded situations compensation may have a higher priority.
Principle 3. If the system is consistent, any plan in its plan library can be selected for execution if the plan's preconditions are met.
These rules allow us to continue system execution at any legal choice point after the current execution encounters an interruption. In other words, it is not necessary to conservatively roll back before trying other paths. The execution flow just needs to "jump" back to an appropriate choice point and continue. We give a more detailed description of this concept in the next section.
FLEXIBLE EXCEPTION HANDLING MODEL

We first explain the multiple-step backtracking mechanism, which strikes a tradeoff between replanning and rigid backtracking for exception handling and recovery. We then describe how the open nested transaction model can be incorporated to encapsulate plan execution and backtracking, gaining system-level support for concurrency control and automatic recovery.
Multiple-Step Backtracking Recovery

Traditional backtracking recovery tries to explore the solution domain in a depth-first search manner. The execution of the system is usually organized as a tree. When a task node fails, the system goes one step back to its parent by undoing or compensating the results produced by the task. If there is still no solution at the parent node, the system recursively backtracks to the upper level until reaching the root. Compared with this rigid search strategy, our approach can backtrack multiple levels at once, skipping intermediate steps according to an evaluation of real-time conditions. Therefore, even in cases where step-by-step backtracking is invalid because the environment has changed, the system can still survive by recovering and continuing the execution flow from a higher level of control. This section first introduces and defines the key concepts related to the flexible backtracking mechanism, and then details the backtracking algorithm for recovery. Finally, the proposed approach is summarized and compared with the generally applied handling methods.
Key Concepts for Multiple-Step Backtracking

The system can use the knowledge of its set of choices at each choice point to decide which substitutable plan to choose if exceptions occur later.
Definition 2. A choice point stack contains the choice points the system has met, in chronological order. It is denoted as cp_stack = {cp_n, cp_{n-1}, ..., cp_1}, where cp_i is the choice point encountered at time i.
When there is a failure of execution, it must happen at one branch of cp_n. The system can look through its stack for another applicable branch of a certain past choice point in order to continue achieving its goal. If possible, the newly selected branch can be identical to the failed one, as a retry. We call this process a "jump".
Definition 3. A jump is a continuation of the system execution flow at another selectable plan implied by elements of its choice point stack.
To enable the jump to the jth choice of cp_i, the preconditions of the plan S_cp_i(j) must be satisfied. Preconditions of a plan usually specify the required resources and execution context. For example, after the failure of hotel booking, the travel management system may find there is not enough money left for Hong Kong or Hawaii because money has been paid for the air ticket to California. The failed branch S_cp_n(f) and the newly selected branch S_cp_i(s), with i ≤ n, may have three types of coordination with respect to their resource sharing or competition. Different types of coordination lead to different processing strategies (a classification sketch in Python follows this list).
• S_cp_n(f) does not consume any resources which will be used by S_cp_i(s). That is, S_cp_n(f) has no access to the resources which must be guaranteed as preconditions of S_cp_i(s). In this case, S_cp_i(s) can be started directly, prior to addressing the failure. However, there could be a background thread reclaiming the resources consumed by S_cp_n(f).
• S_cp_i(s) shares some resources with S_cp_n(f). However, there are still enough resources left for S_cp_i(s) after the consumption by S_cp_n(f). The handling method for this case is the same as for the first type. As shown in Figure 4 (a) and (b), the execution can jump directly to arranging travel at a different destination because the flight and hotel bookings have not spent anything, or have spent only a small amount of money. The refund process can be ignored, or be carried out in parallel with the goal achieving, or even after the goal of the agent has been achieved.
• S_cp_i(s) shares some resources with S_cp_n(f), of which too much has been used for S_cp_i(s) to remain applicable. In this case, compensation must be applied first before execution can continue at S_cp_i(s). As shown in Figure 4 (c), because the booking of flight and hotel has cost too much money, the agent must obtain the refund before continuing towards the goal.

Figure 4. Different recovery strategies for different types of path coordination

Based on these different types of coordination, we have designed an innovative failure recovery algorithm to achieve non-stop execution towards the agent's goal. It is embedded in an open nested transaction structure tailored for BDI agents, thus enabling the building of robust and reliable BPM systems with architecture-level support of concurrency control and higher efficiency and throughput to deal with exceptions.
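The three coordination types can be decided mechanically from resource bookkeeping. The sketch below (our illustration; the dictionary-based resource model is an assumption, not the chapter's design) classifies a failed branch against a candidate branch and returns the matching strategy.

    from enum import Enum

    class Strategy(Enum):
        JUMP_NOW = 1            # no shared resources: start the alternative now
        JUMP_NOW_SHARED = 2     # shared resources, but enough remain: same handling
        COMPENSATE_FIRST = 3    # too much consumed: compensation must precede the jump

    def classify(available: dict, consumed_by_failed: dict, needed_by_alt: dict) -> Strategy:
        shared = set(consumed_by_failed) & set(needed_by_alt)
        if not shared:
            return Strategy.JUMP_NOW
        # Do the alternative's preconditions still hold after the failed
        # branch's consumption?
        enough = all(available.get(r, 0) - consumed_by_failed.get(r, 0) >= needed_by_alt[r]
                     for r in needed_by_alt)
        return Strategy.JUMP_NOW_SHARED if enough else Strategy.COMPENSATE_FIRST

    # Usage: the failed California branch spent $900 of a $2000 budget;
    # the Hawaii alternative needs $1500, so the refund must come first.
    print(classify({"money": 2000}, {"money": 900}, {"money": 1500}))
    # -> Strategy.COMPENSATE_FIRST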
Recovery Mechanism and Algorithm

As shown in Algorithm 1 (Table 1), the overall execution flow of "jump" mainly consists of three parts operating on the data structure of the choice point stack. The choice points (including their sets of choices) encountered during the execution are stored in the stack as one static data structure of the BPM system. The stack is built and updated during
Table 1. Algorithm 1. Execution flow of "jump"

    Func buildStack() {
        if number of eligible plans > 1 then
            save plan choices into choice point stack;
    }

    Func evalAndTry() {
        for i = n; i > 0; i-- do
            foreach choice point S_cp_i(j) do
                if isApplicable(S_cp_i(j)) then
                    continue at S_cp_i(j);
                    compensate previous work after cp_i;
                    remove cp_n, cp_{n-1}, ..., cp_{i+1} from stack;
                    break;
                end if
            end foreach
        end for
    }

    Func backOneStep() {
        randomly select S_cp_n(r);
        add preconditions of S_cp_n(r) as new subgoal;
    }

    Func jump() {
        if system execution fails then
            if evalAndTry() fails then
                backOneStep();
            end if
        end if
    }
the execution, in accordance with the traversal along its execution tree structure. As the system
traverses down the execution tree, it pushes each choice point it meets onto the stack. When there is a failure preventing forward execution, the execution jumps back to a previous choice point. Meanwhile, all choice points coming after the selected one are removed from the stack. Note that the selected choice point becomes the top element of the stack, indicating the current execution flow. These operations guarantee the correctness of the following theorem.
Theorem 1. If a choice point cp_i appears at the top of the stack, all and only its ancestor choice points are present in the stack at the same time.
Proof. Let cp_a be an ancestor choice point of cp_i. Only two possibilities could result in cp_a not being present in the stack: cp_a has not been visited at all, or cp_a has been jumped over. In the first case, cp_i would also not have been visited, and in the latter case cp_i would also have been jumped over. Either way, this contradicts the fact that cp_i is in the stack. Therefore, all ancestor choice points of cp_i are in the stack. Now let cp_j be a choice point in the stack which is not an ancestor of cp_i; then cp_j can only be a descendant of cp_i, or have no relationship with cp_i at all. In the first case, where cp_j is a descendant of cp_i, cp_i cannot be the top element of the stack because cp_j would have been visited later than cp_i. In the second case, cp_j is not on the path from the root to cp_i. In other words, cp_j cannot have been visited, since it is not reachable in the execution towards cp_i, and thus it cannot be in the stack. Therefore, only ancestor choice points of the top element are in the stack.
buildStack() is invoked by the plan selector in Figure 2 to build up the choice point stack for the "jump" process. evalAndTry() is the core method, which searches through the stack to find a qualified plan to run in case of exception. This method utilizes the event-driven feature of BDI agents. During this part, compensation for a failed branch may be carried out in parallel with or after the execution of
a newly selected branch. backOneStep() randomly selects a branch at the last choice point and creates new subgoals to achieve its preconditions if "jump" cannot find an appropriate substitutable plan. This step relies on the means-end reasoning ability of BDI agents, and suspends the forward execution of the agent until compensation or other measures are taken. The compensation process generally follows one of two methods: one is to compensate back step by step, as in sagas (Garcia-Molina & Salem, 1987); the other is to generate a compensation plan for all the backtracked steps altogether (Eiter, Erdem, & Faber, 2004).
With the choice point stack, the method jump() can be added into the system deliberation cycle (Figure 2) as a general exception handling mechanism. For the failed branch, its compensation may be carried out in parallel with or after the execution of its alternatives. Thus, the result of the compensation might falsify the validity of the plan chosen in the "jump" process. However, if the compensation does not overcorrect, but only partially or completely recovers existing side effects, the order of execution will not affect the final result. Non-over-correction is denoted as comp(r) ≤ invl(r), where invl(r) represents the amount of resource r involved in normal execution and comp(r) is the reversal of r in the corresponding compensation. For example, for a refund of a flight booking, the passenger will not pay more in fines than the airfare, nor will the airline refund more than the airfare.
Theorem 2. The result of compensation for the failed branch S_cp(f) will not affect the validity of the execution of the newly selected branch S_cp(s) if the compensation has the feature of non-over-correction.
Proof. If the compensation occurs before any other progress is made, it cannot invalidate the later "jump" process. For a compensation occurring after the "jump", the initial resources r and the minimum
requirement r_min for S_cp(f) and S_cp(s) satisfy r ≥ r_min. Otherwise, they could not have been saved as choice point options in the stack. If S_cp(f) consumes resources cons(r), the selection of S_cp(s) means r − cons(r) ≥ r_min. Thus, the compensation is unable to invalidate the execution of S_cp(s) by releasing some resources back. Conversely, if S_cp(f) produces resources prod(r), we get r + prod(r) − comp(r) ≥ r ≥ r_min under the feature of non-over-correction after the compensation. Then the precondition of S_cp(s) is still guaranteed.
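Algorithm 1 translates almost directly into executable form. The following self-contained Python sketch (an illustration under simplified assumptions: plans are opaque values, and applicability, execution, and compensation are supplied as callables) mirrors buildStack(), evalAndTry(), backOneStep(), and jump().

    import random

    # Minimal sketch of Algorithm 1 (illustrative, not the chapter's implementation).

    class ChoicePoint:
        def __init__(self, goal, plans):
            self.goal = goal
            self.plans = plans          # applicable plans found by the plan matcher

    cp_stack = []                       # cp_stack = {cp_n, ..., cp_1}; top at the end

    def build_stack(goal, plans):
        # Invoked by the plan selector: record a choice point only when
        # more than one eligible plan exists.
        if len(plans) > 1:
            cp_stack.append(ChoicePoint(goal, plans))

    def eval_and_try(is_applicable, run, compensate_after):
        # Walk from the most recent choice point (i = n) back towards the root.
        for i in range(len(cp_stack) - 1, -1, -1):
            for plan in cp_stack[i].plans:
                if is_applicable(plan):
                    del cp_stack[i + 1:]           # remove cp_n, ..., cp_{i+1}
                    compensate_after(cp_stack[i])  # may run in a background thread
                    run(plan)                      # continue execution at S_cp_i(j)
                    return True
        return False

    def back_one_step(add_subgoal):
        # No directly applicable plan: randomly select a branch at the last
        # choice point and post its preconditions as new subgoals.
        add_subgoal(random.choice(cp_stack[-1].plans))

    def jump(is_applicable, run, compensate_after, add_subgoal):
        if not eval_and_try(is_applicable, run, compensate_after):
            back_one_step(add_subgoal)

    # Usage: only the Hawaii branch is currently applicable, so jump() resumes there.
    build_stack("destination", ["california", "hawaii", "hongkong"])
    jump(lambda p: p == "hawaii", print, lambda cp: None, print)   # prints "hawaii"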
Relation to Existing Approaches

There are three primary strategies for recovery: replanning, which discards previous work; plan patching, which continues the current execution plan after repairing the cause of failure; and backtracking, which tries all alternatives in a depth-first search manner. Replanning should be the last resort, because it is expensive to deploy. Plan patching is the most preferable approach; however, it is usually not available because of the uncontrollability of the environment. Backtracking remains the most feasible strategy: it can be carried out systematically and protects existing work as much as possible. Moreover, it can be mapped to nested transaction structures and fulfilled by compensation. Our model is essentially a variant of the backtracking strategy that supports and enables flexible recovery. However, we do not follow the standard backtracking pattern of compensate(nearestCP) - try(), where compensate(nearestCP) compensates the work done after the nearest choice point, and try() selects a different branch to run. Here, "-" denotes sequential execution, while "||" denotes parallel execution. Instead, we argue that achieving the goal has higher priority than doing compensation. Thus,
we introduce the operation evalAndTry(), which evaluates each choice point to find whether there is a directly applicable plan. If one is found, it is selected for execution without waiting for the completion of compensation. So, the backtracking in our approach follows the pattern evalAndTry() || compensate(). If this procedure fails to find an applicable plan from any branch of the choice point stack, the execution pattern is converted to compensate(nearestCP) || try(), where the compensation must complete before try() if and only if the previous execution has consumed too many resources to allow continuation. Our approach will have higher throughput, because compensation and the substitutable plan can execute in parallel, and goal achievement becomes more efficient.
The approach implicitly assumes that certain domain knowledge, such as goal preconditions and action effects, can be modeled with sufficient accuracy for such decisions. If this is not the case in some contexts, compensating first can make subsequent actions more robust, since it helps avoid interactions that are not well modeled. Thus, doing the compensation first can be adopted as the default handler to make the system more robust. In fact, we can make some modifications to the last two functions of Algorithm 1 to simulate different backtracking strategies. For example, if evalAndTry() is constrained so that it will not return a directly executable plan, our approach becomes very similar to the standard one. In case no eligible plans are found in the choice point stack, the system will replan from the existing situation to continue the execution, as described in the function backOneStep(). Compared with mere replanning, our approach uses the previous plan as a guide for further planning, and therefore tries to keep as much finished work as possible valid. Our method is more general than conservative backtracking, and becomes more advantageous as plans in different tree branches become increasingly independent. The introduction of a transaction manager also
frees programmers from considering low-level, error-prone details of concurrency control. Eiter, Erdem, and Faber (2004) describe a similar method, which recovers from execution problems by backtracking to a past nondeterministic choice point, from which the system tries to "repair" the causes of failure and then continues. However, their aim is to generate a reverse plan to compensate back to a previous point and retry from there, which also follows the compensate-then-retry pattern. Their approach can be adopted to generate a compensation plan for the failed execution path.
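The evalAndTry() || compensate() pattern can be realized with ordinary threads. In the sketch below (illustrative names and bodies; not from the chapter), compensation runs in the background while the substitutable plan proceeds, and the flow degrades to the compensate-first pattern only when resources force it.

    import threading

    def run_with_parallel_compensation(try_alternative, compensate, must_compensate_first):
        # evalAndTry() || compensate(): run both concurrently unless the
        # alternative's preconditions require the compensation's resources.
        comp = threading.Thread(target=compensate)
        comp.start()
        if must_compensate_first:
            comp.join()            # degrade to compensate(nearestCP) - try()
        try_alternative()
        comp.join()                # reclaiming side effects finishes eventually

    # Usage: refund the failed booking while the new destination is arranged.
    run_with_parallel_compensation(
        try_alternative=lambda: print("booking Hawaii"),
        compensate=lambda: print("refunding California bookings"),
        must_compensate_first=False,
    )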
Transactional Execution

The open nested transaction model will be integrated into the BPM execution platform to provide system-level support for automatic recovery and concurrency control. The proposed platform brings the following benefits:
• It guarantees modularity and failure atomicity of plans. Thus, programmers only need to deal with two situations: the plan succeeds completely, or it fails without side effects. For example, if renting a yacht fails, the system will return to the choice point of Entertainment with the details about the incomplete deal with the yacht company abstracted out.
• It enforces automation of exception handling and execution resumption. For example, after returning to the choice point of Entertainment automatically, the system can retry renting or choose to visit Disneyland instead.
• The nested structure of transactions matches the tree structure of plan composition elegantly. It provides inter- and intra-task level support for distribution and concurrency control.
The flexible backtracking recovery strategy introduced in this chapter also helps to avoid the
drawbacks of applying the open nested transaction model directly to BPM systems, such as the requirement of a rigid and static execution structure. A plan may contain complex internal structure with respect to goal decomposition, but it is its interfaces, not its implementation details, that concern the user. The internal execution of the plan is a black box to its caller.
Definition 4. A plan is an atomic unit of work from the observer's point of view. Before the plan p is performed, it is in the start state S_0, and after it finishes, it is in the end state S_n, denoted as S_0 →_p S_n.

Figure 5. Interface of a plan

Figure 5 shows the interface of a plan. Plan p and its state changes are constrained by enabling conditions (EC), invariants (I), and termination conditions (TC). p can begin to execute if and only if its enabling conditions are satisfied, that is, S_0 ⊨ EC ∧ I, where ⊨ means "satisfies". The change from S_0 to S_n is consistent if the invariants of the plan are satisfied. The termination conditions specify the desired outcome, or correctness criteria, of executing p. It is also possible that exogenous events, unanticipated interactions between agents, or non-deterministic action results may cause a plan to be aborted when invariants are violated. The possible outcomes of p are:
• Consistent, if S_n satisfies I: I(S_n) ⇒ S_n ⊨ I.
• Inconsistent, if S_n does not satisfy I: ¬I(S_n) ⇒ S_n ⊭ I.
• Correct, if S_n is consistent and satisfies TC: TC(S_n) ⇒ S_n ⊨ TC ∧ I, where I ⊆ TC.
• Consistent but not correct, if S_n is consistent but does not satisfy TC: ¬TC(S_n) ∧ I(S_n) ⇒ (S_n ⊭ TC) ∧ (S_n ⊨ I), where I ⊂ TC.
Two kinds of plan models, an atomic plan and a composite plan, have been discussed above; they
map to the same underlying model. Whether a plan is viewed as atomic or as composed of other plans depends upon the abstraction level at which it is considered. If we need to access a plan's internal organization, we model it as compositional. If we just need to use its functionality as a building block, we model it as an atomic plan.
To avoid conflicts when accessing shared resources, such as a bank account or available hotel rooms, traditional lock mechanisms are applied. However, it is a requirement that a lock be obtained only when the resource is visited and be released as soon as possible after the visit. Otherwise, it is likely to cause the problem of long-running transactions, which results in low efficiency and even deadlock. As long as the system is consistent, cascading rollback is avoided.
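As one way to picture Definition 4 and the plan interface, this Python sketch (an assumed structure, not the chapter's implementation) wraps a plan body in enabling-condition, invariant, and termination-condition checks and reports one of the outcomes defined above.

    from typing import Callable

    def execute_plan(state: dict,
                     body: Callable[[dict], None],
                     ec: Callable[[dict], bool],   # enabling conditions
                     inv: Callable[[dict], bool],  # invariants
                     tc: Callable[[dict], bool]) -> str:
        # p can begin iff S_0 satisfies EC and I.
        if not (ec(state) and inv(state)):
            return "not started"
        body(state)                    # S_0 -> S_n
        if not inv(state):
            return "inconsistent"      # S_n violates I
        return "correct" if tc(state) else "consistent but not correct"

    # Usage: book a flight if funds allow; the invariant keeps funds non-negative.
    outcome = execute_plan(
        {"funds": 500, "flight": False},
        body=lambda s: s.update(funds=s["funds"] - 300, flight=True),
        ec=lambda s: s["funds"] >= 300,
        inv=lambda s: s["funds"] >= 0,
        tc=lambda s: s["flight"],
    )
    print(outcome)   # -> correct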
IMPLEMENTATION AND EXPERIMENTS

We have built a reactive workflow prototype using 3APL-M (Koch, 2005) and the JBoss Transaction Service (formerly the Arjuna Transaction Service) (JBoss Inc., 2006) to simulate the travel arrangement example. 3APL-M is a lightweight version of 3APL (Dastani et al., 2003) and is distributed under the GNU GPL. Its source code remains much simpler by leaving out supplementary components of 3APL, such as the integrated development environment. It behaves as a programming library whose API allows a Java application to call 3APL logic and deliberation structures, and vice versa.
Because it is fully integrated with the Java platform and programming environments, 3APL-M is a good prototyping tool for cognitive agents. Further, programs written in 3APL-M can be easily migrated to 3APL because they share similar underlying language concepts and programming constructs. The JBoss Transaction Service is employed as the transactional execution manager, which guarantees the isolation of parallel plans. Its TxCore transaction engine supports both closed and open nested transactions and presents programmers with a toolkit of Java classes which can be invoked by 3APL-M agents directly to obtain desired properties, such as persistence and concurrency control.
As shown in Figure 3, a root goal of the system is decomposed into subgoals recursively, resulting in a tree structure. Each subgoal is achieved through a transaction-like function, as shown in Algorithm 2 (Table 2). The subgoals are then organized in an open nested transaction structure. Whenever an exception occurs, the execution of that transaction is taken over by the function jump() shown in Algorithm 1. jump() forks two threads: one is used to continue the execution at an earlier choice point, and the other is used to compensate the failed plan. Because the rollback operation of traditional database transactions is usually not applicable in dynamic and open environments, it is not invoked in the prototype. Instead, if jump() succeeds, the failed task is allowed to commit, and its side effects will be undone by its counterpart compensating transaction.
Table 2. Algorithm 2. Pseudo-code for the travel agent

    Func transactionFlightBooking() {
        beginTrans();
        sequence of actions for flight booking;
        if action fails then system.jump();
        commitTrans();
    }

    Func transactionHotelBooking() {
        beginTrans();
        sequence of actions for hotel booking;
        if action fails then system.jump();
        commitTrans();
    }

    ......

    Func TravelSystem.run() {
        beginTrans();
        switch system.select(Destination) do
            case California
                transactionFlightBooking();
                transactionHotelBooking();
                break;
            ......
        end switch
        switch system.select(Entertainment) do
            case Sailing
                transactionYachtRenting();
                break;
            case Disneyland
                transactionTicketBuying();
                break;
        end switch
        commitTrans();
    }
The "jump" algorithm is opportunistic, in that it assumes the failure of one path of the execution tree will not block the others. In the worst case, where a failure state always holds resources required by other actions, the cost of maintaining and iterating the choice point stack increases. However, as different execution paths become increasingly independent of each other, performance improves.
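The transaction-like functions in Table 2 can be emulated without a transaction manager, for illustration. In this Python sketch (our simplification; the real prototype delegates begin/commit to JBoss TxCore classes, whose API is not reproduced here), a failed step hands control to a jump handler instead of rolling back, matching the commit-then-compensate behavior described above.

    from contextlib import contextmanager

    @contextmanager
    def transaction(name, jump_handler):
        # Emulates beginTrans()/commitTrans(): on failure, control passes to
        # jump(), and the task is still allowed to commit; its side effects
        # are undone later by a counterpart compensating transaction.
        print(f"beginTrans({name})")
        try:
            yield
        except Exception:
            jump_handler(name)
        print(f"commitTrans({name})")

    def jump_handler(failed):
        print(f"jump(): recovering from failed {failed}")

    # Usage:
    with transaction("flight_booking", jump_handler):
        raise RuntimeError("no seats")   # simulated action failure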
FUTURE TRENDS

The shared theme of transactions, workflows, and multi-agent systems is to ensure system correctness by executing concurrent tasks cooperatively towards their goals during normal execution, and to guarantee system reliability by repairing affected participating tasks in a coordinated way to reach a semantically correct state when exceptions occur. As shown in Figure 6, a workflow system evolves into a multi-agent system when its operating environment becomes more complex and dynamic, and condenses into a database transaction when it obtains full control of the running environment. As these three domains are closely related, more work is required to explore and identify the correlations among them. In fact, the boundaries and criteria for classifying real applications into one of the three types of systems are not clear.
Figure 6. Relations among transactions, workflows, and multi-agent systems
To build a robust and reliable BPM system working in complex and dynamic environments, we have to assimilate key concepts from transactions, for structured concurrency and recovery management, and from multi-agent systems, for adaptability and reactivity. Due to the growing complexity and unpredictability, transactions and their extensions cannot be straightforwardly adopted as a satisfactory solution for systems operating in dynamic and open environments. However, they still provide invaluable concepts and features (e.g., failure atomicity, concurrency control, nested structure, compensation, and forward recovery) which can be used as components of a BPM system, especially for the problem of error handling and recovery. We can integrate these essential concepts with the sound features, such as autonomy and reactivity, brought by agent systems, thus designing a flexible and well-organized BPM recovery structure from a programming and software engineering perspective. This chapter has proposed an innovative backtracking mechanism utilizing the concepts and features of BDI agents, and integrating open nested transactions to support its execution. A more general expression of our algorithm can be formalized to incorporate the full range of exception handling strategies between the two extremes of replanning and rigid step-by-step backtracking.
Our work does not go deeply into the issue of distributed computation, which is also essential for building BPM systems operating in complex and open environments. As agents are naturally distributed entities, multi-agent concepts can be further applied to help BPM systems cope with a wide range of internal and external interactions and changes. As a result, our multiple-step backtracking mechanism needs to be further extended to embrace the distributed execution of BPM tasks. This is our future work.
CONCLUSION

This chapter has described a multiple-step backtracking mechanism that maintains a tradeoff between replanning and rigid backtracking for exception handling and recovery, thus avoiding the drawbacks of rigid backtracking. Our model encapsulated the business process execution in an open nested transaction to inherit the benefits of concurrency control and distribution management of participating plans. Our approach mainly contained three parts. We first applied the BDI agent model to design and construct BPM systems that operate in complex and dynamic environments, with the advantages of adaptability and flexibility. Then we introduced the "jump" approach for exception handling and recovery, which utilizes the beneficial event-driven and means-end reasoning features of BDI agents. Finally, we used the open nested transaction model to encapsulate plan execution and backtracking, gaining system-level support for concurrency control and automatic recovery.
The nested tree structure was applied to depict the BPM system execution. During the construction of the tree structure along with the system execution, the choice points were stored and maintained in a stack. By iterating through the stack, the system can find and execute a suitable plan from previously applicable ones to achieve its goal as soon as an exception occurs. This "jump" procedure strikes a balance between complete replanning and rigid step-by-step backtracking after exceptions occur, by utilizing previous planning results in determining the response to failure. Because the substitutable path is allowed to start prior to, or in parallel with, the compensation process, the system can achieve its goals more directly and with higher efficiency. Our approach also frees system programmers from considering low-level details of concurrency control and exception handling, because transactional execution automates these issues. Combining and utilizing several beneficial
features of BDI agents, the open nested transaction model is tightly integrated into the BPM system. Both the BDI data structures and the deliberation cycle are leveraged to maximize the functionality of transaction management.
REFERENCES

Dastani, M., Riemsdijk, B. v., Dignum, F., & Meyer, J.-J. C. (2003). A programming language for cognitive agents: Goal directed 3APL. In Programming multi-agent systems (pp. 111-130). Springer Verlag.
d'Inverno, M., Luck, M., Georgeff, M. P., Kinny, D., & Wooldridge, M. (2004). The dMARS architecture: A specification of the distributed multi-agent reasoning system. Journal of Autonomous Agents and Multi-Agent Systems, 9(1-2), 5-53. doi:10.1023/B:AGNT.0000019688.11109.19
Ehrler, L., Fleurke, M. K., Purvis, M., & Savarimuthu, B. T. R. (2006). Agent-based workflow management systems (WfMSs). Information Systems and E-Business Management, 4(1), 5-23. doi:10.1007/s10257-005-0010-9
Eiter, T., Erdem, E., & Faber, W. (2004). Plan reversals for recovery in execution monitoring. In 10th International Workshop on Non-Monotonic Reasoning (pp. 147-154).
Garcia-Molina, H., & Salem, K. (1987). Sagas. SIGMOD Record, 16(3), 249-259. doi:10.1145/38714.38742
Georgakopoulos, D., Hornick, M. F., & Manola, F. (1996). Customizing transaction models and mechanisms in a programmable environment supporting reliable workflow automation. IEEE Transactions on Knowledge and Data Engineering, 8(4), 630-649. doi:10.1109/69.536255
Gray, J., & Reuter, A. (1993). Transaction processing: Concepts and techniques. San Francisco: Morgan Kaufmann.
JBoss Inc. (2006). JBoss Transactions 4.2.2 transaction core programmers guide. Retrieved May 7, 2008, from http://labs.jboss.com/jbosstm/docs/4.2.2/manuals/pdf/core/ProgrammersGuide.pdf
Jennings, N. R. (2001). An agent-based approach for building complex software systems. Communications of the ACM, 44(4), 35-41. doi:10.1145/367211.367250
Kamath, M., & Ramamritham, K. (1996). Correctness issues in workflow management. Distributed Systems Engineering, 3(4), 213-221. doi:10.1088/0967-1846/3/4/002
Klein, M., Rodriguez-Aguilar, J. A., & Dellarocas, C. (2003). Using domain-independent exception handling services to enable robust open multi-agent systems: The case of agent death. Autonomous Agents and Multi-Agent Systems, 7(1-2), 179-189. doi:10.1023/A:1024145408578
Koch, F. (2005). 3APL-M: Platform for lightweight deliberative agents. Retrieved May 7, 2008, from http://www.cs.uu.nl/3apl-m/docs/3aplmmanual.pdf
Nagi, K. (2001). Transactional agents: Towards a robust multi-agent system. Berlin, Heidelberg: Springer-Verlag.
Ramamohanarao, K., Bailey, J., & Busetta, P. (2001). Transaction oriented computational models for multi-agent systems. In Proceedings of the 13th IEEE International Conference on Tools with Artificial Intelligence (ICTAI) (pp. 11-17). Washington, DC: IEEE Computer Society.
Rao, A. S., & Georgeff, M. P. (1995, June). BDI agents: From theory to practice. In Proceedings of the First International Conference on Multiagent Systems (pp. 312-319). San Francisco, CA.
Souchon, F., Dony, C., Urtado, C., & Vauttier, S. (2004). Improving exception handling in multi-agent systems. In Software engineering for multi-agent systems II (LNCS 2940, pp. 167-188). Springer Verlag.
Tripathi, A., & Miller, R. (2001). Exception handling in agent-oriented systems. In Advances in exception handling techniques (LNCS 2022, pp. 128-146). Springer.
Unruh, A., Bailey, J., & Ramamohanarao, K. (2004). Managing semantic compensation in a multi-agent system. In International Conference on Cooperative Information Systems (LNCS 3290). Springer.
Weikum, G., & Schek, H.-J. (1992). Concepts and applications of multilevel transactions and open nested transactions. In Database transaction models for advanced applications (pp. 515-553). San Francisco: Morgan Kaufmann.
Wooldridge, M. (2001). An introduction to multiagent systems. John Wiley & Sons.
KEY TERMS AND DEFINITIONS

BDI Agent: A software entity embedded in a certain environment that behaves reactively to the changing situation in order to meet its design objectives. It has four key data structures: beliefs, desires, intentions, and plans.
Choice Point: For a certain goal of the system, if the plan matcher finds more than one eligible plan, the corresponding node in the execution tree is marked as a choice point.
Choice Point Stack: A choice point stack contains the choice points the system has met, in chronological order.
Consistent: The plan execution is consistent if it terminates at a state S_n in which the invariants are satisfied.
Correct: The plan execution is correct if it terminates at a state S_n in which both the invariants and the termination conditions are satisfied.
Jump: A jump is a continuation of the system execution flow at another selectable plan implied by elements of its choice point stack.
Non-Over-Correction: The compensation process only partially or completely recovers the side effects made by the normal execution. Let invl(r) represent the amount of resource r involved in normal execution and comp(r) the reversal of r in the corresponding compensation; then comp(r) ≤ invl(r).
Plan: A plan is an atomic unit of work from the observer's point of view. It is constrained by enabling conditions (EC), invariants (I), and termination conditions (TC).
This work was previously published in Handbook of Research on Complex Dynamic Process Management: Techniques for Adaptability in Turbulent Environments, edited by Minhong Wang and Zhaohao Sun, pp. 251-270, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 4.14
A Resource-Based Perspective on Information Technology, Knowledge Management, and Firm Performance Clyde W. Holsapple University of Kentucky, USA Jiming Wu California State University–East Bay, USA
DOI: 10.4018/978-1-60566-659-4.ch016

ABSTRACT

The resource-based view of the firm attributes superior firm performance to organizational resources that are valuable, rare, non-substitutable, and difficult to imitate. Aligned with this view, the authors contend that both information technology (IT) and knowledge management (KM) comprise critical organizational resources that contribute to superior firm performance. The authors also examine the relationship between IT and KM, and develop a new second-order variable, IT-KM competence, with IT capability and KM performance as its formative indicators. Thus, this chapter contributes not only by investigating the determinants of firm performance
but also by broadening our understanding of the relationships among IT, KM, and firm performance.
INTRODUCTION

For the last two decades, the investigation of the return on investments in IT has been a key objective of many studies. In pursuing this objective, researchers have developed two main theoretical frameworks: one asserts that IT has a direct impact on firm performance (Bharadwaj, 2000), while the other proposes that the effect of IT on firm performance is mediated by business processes (Tanriverdi, 2005). However, no matter which theoretical framework has been employed, some studies have failed to find a significant correlation
between IT and firm performance. Because the return on IT investments seems to be contingent, scholars call for more research into why IT may not benefit business, how to make IT effective, and what the key determinants of IT success are (Dehning & Richardson, 2002). Meanwhile, considerable research attention has been devoted to the importance of KM in the rapidly changing, competitive, and dynamic business environment (Holsapple & Wu, 2008). Modern organizations are turning to KM practices and applications to foster the creation, integration, and usage of knowledge assets that enable them to compete in an increasingly global economy. In light of this, researchers have attempted to provide empirical evidence of the strategic consequences that KM can bring to organizations (Grant, 1996). For example, based on survey data collected from 177 firms, Chuang (2004) finds that greater KM capabilities are significantly associated with greater competitiveness and that social KM resource has a significant impact on competitive advantage. Similarly, in a survey-based investigation of the link between KM activities and competitiveness, Holsapple and Singh (2005) observe that the KM activities of interest can be performed in ways that improve organizational competitiveness, and can do so in each/all of four ways: enhanced productivity, agility, innovation, and reputation. Although there exist studies on the IT-firm performance relationship and on the KM-firm performance link, these studies have paid insufficient attention to the full map of relationships among IT, KM, and firm-level return, and have placed relatively little emphasis on the collaborative effect of IT and KM on firm performance (Wu, 2008). Given the inseparability of IT and KM, and the strategic importance of the two, a thorough investigation of both their joint and separate roles in firm performance is necessary. Such investigation would enrich not only the theoretical understanding of the mechanism for competitive advantage, but also
the research models investigating determinants of superior firm performance. Thus, the work would be of value not only to practitioners striving to achieve and sustain business success, but also to researchers interested in identifying determinants of better firm performance. This study contributes to such investigation. More specifically, the purpose of this chapter is to theorize a triangle of relationships among IT, KM, and firm performance, and to develop a theoretical model with testable hypotheses that improve our understanding of the effects of IT and KM on firm performance. The theoretical foundation of this paper is embedded in the resource-based view of the firm and prior work by Holsapple and his colleagues. The current study contributes to the literature in a number of ways. First, this study is among the first to recognize that KM may play an important role in the link between IT and firm performance. Thus, the study may provide a plausible explanation for why some previous research has failed to discover a significant relationship between IT and firm performance. Second, we examine the determinants of firm performance by introducing and employing a new perspective, which focuses on the collective impacts of IT and KM. Such a perspective may broaden our approach to identifying determinants of firm performance. Third, we present methods to measure relevant variables. Therefore, the current chapter is useful and effective in guiding future empirical research in this regard. Finally, this study also investigates the relationship between IT and KM, which has so far received relatively little research attention. The remainder of the chapter is organized as follows. In the next two sections, we review the state of IT and KM. Then, we present the research model and hypotheses, followed by a section in which we discuss methods for measuring the variables. Finally, we provide a brief summary of the contributions provided by this research.
INFORMATION TECHNOLOGY

Information technology can be defined as covering a broad range of technologies involved in information processing and handling, such as computer hardware, software, telecommunications, and databases (Huff & Munro, 1985). Realizing that IT enables businesses to run efficiently and profitably, organizations around the world have made tremendous investments in it. As estimated by market research organizations, world IT spending in 2000 was about $2 trillion and will reach $3.3 trillion in 2008, with an average growth rate of over 7% in these eight years (WITSA, 2000; Gartner, 2007). It is also estimated that such a growth rate will be sustained for several years after 2008 (InformationWeek, 2007). In the U.S. economy, IT spending now accounts for nearly 40% of overall expenditure on capital equipment, making it the largest line item in American firms' budgets for capital investment (Cline & Guynes, 2001). Not surprisingly, IT spending has already accounted for approximately 4% of U.S. gross domestic product (GDP) (BusinessWeek, 2001).
IT has profoundly changed the way that business gets done in nearly every industry. Using IT, organizations radically redesign their business processes to streamline and simplify operations and remain competitive in a changing environment. With the help of computer-aided design and operational systems, organizations can greatly reduce the overall cost and time of developing and manufacturing their products and of providing their services. Key customer-related IT, such as customer relationship management systems, allows organizations to capture and maintain detailed information about customer interactions, thus enabling them to provide quality customer service and to increase sales. As a specific category of IT serving middle-level managers, management information systems summarize and report on a company's basic operations using transaction-level data, and thus help with monitoring, controlling, and decision-making activities (Laudon &
Laudon, 2006). Today in the U.S., more than 23 million managers and 113 million workers in the labor force rely on information systems to conduct day-to-day business and to achieve strategic business objectives (Laudon & Laudon, 2006). Along with the rapid growth and development of IT, the role of IT in business has greatly expanded, ranging from simple back-office functions to enabling business process reengineering and driving competitive advantage. Until the 1960s, IT played a very simple role in business operations: transaction processing, record-keeping, accounting, and other data processing activities. By the late 1970s, the major role of IT began to shift toward providing managerial end users with ad hoc and interactive support for their decision-making processes. In the 1980s-1990s, IT was mainly employed to support end users in using their own computing resources for job requirements and to assist top executives in easily accessing critical information in their preferred formats. Now the primary role of IT is to help develop a fast, reliable, and secure Internet, on which e-commerce and Web-enabled enterprises are based (Laudon & Laudon, 2006). Figure 1 shows the expanding role of IT in business and organizational management. Because IT is an area of rapid change and growth, it is important and necessary for organizations and individuals to continually adapt and develop new skills and knowledge.

Figure 1. The expanding role of IT in business (O'Brien & Marakas, 2007)
KNOWLEDGE MANAGEMENT
Knowledge refers to a fluid mix of framed experience, values, contextual information, and expert insight that offers a framework for interpreting, assimilating, and integrating new experiences and information (Davenport & Prusak, 1998). Knowledge is highly human-related. More specifically, it originates from and is applied in the brains of human beings. From one perspective, knowledge is a product of human reflection and experience emphasizing understanding and sense-making (why
and how), while information can be considered as a message focusing on the awareness of something (who and what) (Bennet & Bennet, 2003). Others contend that modern computer technology can also make sense of situations, learn from its experiences, and derive/discover new knowledge, in addition to message handling (Holsapple, 2005). In this vein, knowledge is something that is conveyed in representations (e.g., linguistic, symbolic, digital, mental, behavioral, material patterns) that are usable to some processor (e.g., the human mind) and can be categorized as being descriptive (characterizations of the state of some system – who, what, when, etc.), procedural (characterizations of how to do something), or reasoning (characterizations of logic or causality). In this view, information is one gradation of descriptive knowledge, but it can be operated on by other types of knowledge (i.e., procedures and logic). Holsapple and Joshi (2004, p. 593) define knowledge management as "an entity's systematic and deliberate efforts to expand, cultivate, and apply available knowledge in ways that add value to the entity, in the sense of positive results in accomplishing its objectives or fulfilling its purpose." Thus, KM involves any activities of
generating new knowledge through derivation or discovery, acquiring valuable knowledge from outside sources, selecting needed knowledge from internal sources, altering the state of knowledge resources, and embedding knowledge into organizational outputs (Holsapple & Joshi, 2004). KM is becoming increasingly important and prevalent for many reasons. To succeed in today's dynamic global economy, organizations must reduce their cycle times in production, operate with minimum fixed assets and costs, shorten product development time, improve customer service and product quality, enhance employee productivity and performance, provide innovative products and services, modernize and reengineer business processes, and increase agility and flexibility (Gupta et al., 2004). All these critical business activities require continued efforts to acquire, create, document, share, and apply knowledge by employees and teams at all organizational levels. Because of the importance of KM to success, organizations have invested heavily in it. According to IDC, global business spending on KM was projected to rise from $2.7 billion in 2002 to $4.8 billion in 2007 (Babcock, 2004). The company also estimated that in the United States, KM spending reached $1.4
billion in 2001 and would reach $2.9 billion by 2006, an average annual growth rate of over 20% across these five years (Motsenigos & Young, 2002). KM has also attracted tremendous attention from researchers. Figure 2 exhibits the trend of publications for KM as tracked by Google Scholar from 1995 to 2007. For each year, the number of publications referring to "knowledge management" is shown. There were 513 such publications in 1995, about 10 per week; the count increased exponentially to 12,600 in 2005, about 243 per week. Two weeks' worth of publications in 2005 thus almost equal a whole year's publications in 1995, and the simple average annual growth over these 10 years is an astonishing 236% (equivalent to a compound annual growth rate of roughly 38%). To put this KM trend in perspective, we compare it with the traditional business discipline of operations management (OM), for which Google Scholar reports 1,190 publications in 1995, ramping up to 3,760 in 2005. Figure 2 also shows the trend of publications for OM. One important research stream in this field focuses on the KM ontology, which offers a comprehensive understanding of KM phenomena (Holsapple & Joshi, 2004). While specifying the conceptualization of the KM domain, the
study recognizes three categories of KM influences: managerial, resource, and environmental. The study also identifies five major knowledge manipulation activities: acquisition, selection, generation, assimilation, and emission, as well as four major managerial activities that constitute the managerial influences: leadership, coordination, control, and measurement.

Figure 2. Publication trends for knowledge management and operations management. Source: Google Scholar, March 24, 2008
RESEARCH MODEL AND HYPOTHESES
Drawing on the resource-based view (RBV) of the firm and prior empirical findings, we introduce a conceptual model positing that IT and KM both play an important role in predicting firm performance, and that KM performance is highly related to IT capability. As depicted in Figure 3, the model includes a new variable – IT-KM Competence, which is also conceptualized as a key antecedent of firm performance. Below we describe and discuss the new variable and the conceptual links in the research model.
Figure 3. A conceptual model of IT, KM, and firm performance
IT Capability and KM Performance
In the last decade, more and more researchers have adopted the notion that IT plays a critical role in shaping firms' efforts for KM. For example, in a study examining the link between KM and computer-based technology, Holsapple (2005) argues that IT is of great importance not only for enabling or facilitating the knowledge flows among knowledge processors (human or computer-based) but also for assisting in the measurement, control, coordination, and leadership of knowledge and the knowledge processors. Thus, he asserts that modern KM is inseparable from a consideration of IT. Similarly, in a study investigating the relationships among IT, KM, and firm performance, Tanriverdi (2005) argues that an IT-based coordination mechanism can increase the reach and richness of a firm's knowledge resources, and enable business units of the firm to learn about knowledge sharing opportunities with each other. Thus, he posits that IT relatedness, which is defined as "the use of common IT infrastructures and common IT management processes across business units" (p. 317), is positively associated with KM capability. Not surprisingly, previous research also suggests that IT plays an important role in supporting and enhancing the aforementioned KM activities: acquisition, selection, generation, assimilation, and emission (Holsapple & Singh, 2003; Jones,
2004). In performing knowledge acquisition activities, an IT-based network system can assist in identifying, evaluating, analyzing, and qualifying external knowledge that needs to be acquired to support the firm’s growth (Holsapple & Singh, 2003). An IT-based knowledge selection system can help a firm be more efficient and effective in the process of knowledge selection. For example, Buckman Laboratories uses K’Netix, an IT-based knowledge selection system, to locate, collect, select, and package appropriate knowledge received from 11 resources and transfer it to the person requesting the knowledge (Holsapple & Singh, 2003). In the activity of knowledge generation, a decision support system may draw on databases and text-bases, plus banks of solvers and/or rule sets to derive knowledge in the sense of expectations, explanations, evaluations, solutions, recommendations, and so forth (Bonczek et al., 1981; Holsapple & Whinston, 1996). In addition, such systems can also help in other knowledge generation activities such as data mining, text mining, and sense-making (Jones, 2004). In the activities of knowledge assimilation, an IT-based organizational memory system can help in modeling, representing, and archiving knowledge, while an IT-based less structured repository (e.g., discussion database and lessons-learned system) can be used to store insights and observations (Jones, 2004). Finally, in the process of knowledge emission, IT-based systems can support users in sharing and transferring knowledge quickly and cost-efficiently. For instance, to enhance knowledge sharing among employees in geographically dispersed locations, Honda has established a full-service international communications network system (called Pentaccord) and a system to manage selected databases (sales, finance, and part ordering) on a global basis (Holsapple & Singh, 2003). In short, the current literature suggests that KM performance is of particular relevance to IT. Thus, we hypothesize:
H1: IT capability is positively related to KM performance.

Here, IT capability refers to an organization's ability to identify IT that meets business needs, to deploy IT to improve business processes in a cost-effective manner, and to provide long-term maintenance and support for IT-based systems (Karimi et al., 2007). By KM performance, this chapter means the degree to which KM activities harness organizational resources to achieve the goals or purposes of KM initiatives (Wu, 2008). By linking IT capability to KM performance, the first hypothesis highlights that KM and IT are inseparable and can play a communal and collective role in an organization.
The Resource-Based View of the Firm
Rooted in the management strategy literature, the RBV of the firm was developed to understand why firms are able to gain and sustain a competitive advantage (Newbert, 2007). RBV states that a firm's performance is mainly determined by a unique set of firm resources that are valuable, rare, non-substitutable, and difficult to imitate. RBV indicates that such resources are often rent-yielding and likely to survive competitive imitation when protected by isolating mechanisms such as resource connectedness, historical uniqueness, and causal ambiguity (Barney, 1991). In short, RBV explains firm performance differences through resource asymmetry. That is, the resources needed to achieve strategic business objectives are heterogeneously distributed across firms, and thus are posited to account for the differences in firm performance (Grant, 1991). Based on the RBV, a resource can be defined as a rare and inimitable firm-specific asset that adds value to firms' operations by enabling them to implement strategies that improve efficiency and effectiveness (Wade & Hulland, 2004). Advocates of RBV tend to characterize resources
broadly – including financial capital, physical assets, knowledge, brand image, IT, organizational processes, and so forth (Bharadwaj, 2000). Thus, a resource is an observable but not necessarily tangible asset that can be independently managed, appraised, and even valued (Karimi et al., 2007). RBV suggests that a resource held by a majority of competing firms (i.e., a non-rare resource) may not explain firm performance differences (Newbert, 2007). It also suggests that if a resource held by just a few competing firms is not costly to imitate, the resource is likely to be quickly obtained by competitors, and thus may not explain differences in firm performance, either (Ray et al., 2005).
IT Capability and Firm Performance
As an important firm resource, IT capability plays a key role in firm performance. IT capability enables organizations to design innovative products and services, and to reduce the overall cost and time of developing the products and providing the services. For instance, IT giant Apple developed an innovative product, the iPod, which has dominated digital music player sales in the United States and brought the company new sales records and great business success. Continuing to innovate, the company recently released the iPhone, an Internet-enabled multimedia mobile phone. Computer-aided design (CAD) systems assist Toyota's designers in creating and modifying their product specifications much faster than before, thus achieving cost efficiency. CAD allows a designer to see his or her ideas as they take shape on a monitor display, in addition to clay models. Taking advantage of CAD, Toyota designs quality into its products. IT capability is the primary driver of business process reengineering, which integrates a strategy of promoting business innovation with a strategy of making major improvements to business processes so that a company can gain and sustain competitiveness (O'Brien & Marakas, 2007). The computation capability, information processing
speed, and connectivity of computers and Internet technologies can considerably enhance the efficiency of a business process, as well as communications and collaboration among the people responsible for its management, implementation, and maintenance (Wade & Hulland, 2004). For example, many Fortune 500 companies count on enterprise resource planning (ERP) systems to reengineer, automate, and integrate their marketing, manufacturing, sales, distribution, finance, and human resource business processes (O'Brien & Marakas, 2007). IT capability used for production and operations can improve the performance of companies that must plan, monitor, and control inventories, facilities, and the flow of products and services. Many manufacturing and production systems can efficiently deal with the operation and maintenance of production facilities; the establishment of production goals; the acquisition, storage, and distribution of production materials; and the scheduling of equipment, facilities, materials, and labor required to fulfill an order (Laudon & Laudon, 2006). Thus, computer-integrated (or computer-aided) manufacturing enables organizations to reduce the cost and time of producing goods by simplifying, automating, and integrating all production and support processes (O'Brien & Marakas, 2007). Moreover, such manufacturing helps companies achieve the highest product quality by bridging the gap between the conceptual design and the manufacturing of finished goods. IT capability is the key to electronic commerce, which refers to the use of digital technology and the Internet to execute major business processes of developing, marketing, selling, delivering, servicing, and paying for products and services (Laudon & Laudon, 2006; O'Brien & Marakas, 2007). Electronic commerce is transforming firms' relationships with customers, employees, suppliers, distributors, retailers, and partners into digital relationships using networks and the Internet (Laudon & Laudon, 2006; Wheeler, 2002). More importantly, it can dramatically improve firm
performance by allowing companies to achieve six major business values: (1) generate new revenue from online sales, (2) reduce costs via online transactions, (3) attract new customers through online marketing and advertising, (4) increase customer loyalty and retention by providing Web-based customer service and support, (5) develop new Web-based markets and distribution channels, and (6) develop and sell digital goods such as music tracks, video streams, online games, and flight tickets (O'Brien & Marakas, 2007; Wheeler, 2002). IT capability can be a key enabler of superior firm performance by improving communication and collaboration within an organization. IT, especially network technologies, provides the basic infrastructure and platform for communication, coordination, and collaboration among the members of business teams and workgroups. In other words, such IT capability enables employees and/or managers at all levels to work together more easily and effectively by helping them share information with each other, coordinate their individual efforts and use of resources, and work together cooperatively on joint projects and assignments (O'Brien & Marakas, 2007). For example, knowledge experts, technicians, computer specialists, and R&D engineers may form a virtual team for a KM system development project. The communication, coordination, and collaboration among the team members may rely heavily on IT-based applications such as email, instant messaging, newsgroups, videoconferencing, discussion forums, and a Web-based database for convenient and immediate access to work-in-progress information. Such improved communication and collaboration can significantly increase the quality of the teamwork. Adopting the resource-based view of the firm, information systems researchers suggest that IT capability has an impact on firm performance. For example, Mata and colleagues (1995) point out that managerial IT skills are scarce and firm specific, and thus likely to serve as sources of sustained competitive advantage. Focusing on
the differential effects of various IT resources on customer service performance, Ray and colleagues (2005) argue that IT resources are valuable because they enable firms to increase the efficiency or effectiveness of business processes compared to what would be the case if these resources were not exploited. Similarly, Bharadwaj (2000) contends that organizations successful in developing superior IT capability will enjoy superior financial performance by boosting revenues and/or reducing costs. In line with the resource-based view of the firm and the literature, we therefore hypothesize:

H2: IT capability is positively related to firm performance.
KM Performance and Firm Performance
As mentioned earlier, the RBV indicates that knowledge is a unique company resource (Grant, 1996). Therefore, KM can also be viewed as such a resource, important to firm performance because it allows the firm to better leverage its knowledge. KM facilitates organizational learning, which keeps organizations in tune with trends and developments in their business, and thus helps them perform better. Here, organizational learning refers to individual learning, team learning (i.e., learning in small or large groups), or entire organization-level learning (Bennet & Bennet, 2003). All these levels of learning are necessary for an organization eager to possess the requisite knowledge to improve performance. From a KM perspective, organizational learning is critical and should be nurtured and made an integral part of KM strategy. Organizational learning also reflects an organization's capacity to acquire or generate the knowledge necessary to survive and compete in its environment (Bennet & Bennet, 2003). KM can change employees' attitudes toward learning and its impact on an organization's competitive position (Wu, 2008). Such change
is likely to stimulate organizational learning because individuals and teams come to believe that learning can help their company handle change, uncertainty, and complexity in the ever-changing business environment. KM helps define and specify what should be learned, when it should be learned, and who should be learning it. KM can also create a culture of peer collaboration and open communication, both leading to a setting conducive to organizational learning. Moreover, the KM activities of knowledge acquisition and generation promote organizational learning by motivating individuals to obtain new knowledge from external sources or from existing knowledge, and to make it suitable for future use. KM can improve firm performance not only by facilitating organizational learning but also by encouraging knowledge sharing. A core principle of KM is to make knowledge sharing easier and more timely, and to encourage employees and managers to work together in ways that incorporate the knowledge shared among them. Consequently, one important goal of KM is to boost productivity and efficiency by building a set of methods and tools to foster appropriate flows of knowledge. For instance, to align with its strategy of possessing a platform for quick and easy knowledge sharing on a global scale, Xerox developed Eureka, an intranet-based communication system, in 1996 (Barth, 2000). The system is linked with a corporate database that helps service technicians share repair tips. There are more than 36,000 tips in the system, which can be accessed by about 19,000 Xerox technicians via their laptop computers (Barth, 2000). The increasing importance of KM also motivates managers to develop reward and personnel evaluation structures favoring knowledge sharing activities. Reward and punishment standards help define acceptable behavior. By incorporating desired KM behavior into annual performance evaluations, an organization may improve its own performance by encouraging such critical activities as knowledge sharing and the organizational learning discussed above.
KM can strengthen an organization's competitive position by increasing its agility (Holsapple & Singh, 2003). In general, agility refers to an organization's ability to detect changes, opportunities, and threats in its business environment and to provide speedy and focused responses to customers and other stakeholders by reconfiguring resources and processes and/or by developing strategic partnerships and alliances (Mathiyalakan et al., 2005). Thus, agility derives from both the physical ability to act and the intellectual ability to understand appropriate things to act upon (Dove, 2003). KM is recognized as a key success factor for agility because it enables an organization to apply effectively its knowledge of market opportunities, production processes, business practices, cutting-edge technologies, quality service, management skills, the extent of a threat, and so forth. In a continuously changing and unpredictable business environment, it is crucial for an organization to manage knowledge in a way that quickly absorbs new knowledge, fully assimilates it, and effectively exploits it (Holsapple & Wu, 2008). Consequently, an organization with sufficient competencies in KM will be agile enough to deliver leading-edge responses and achieve a better competitive position. KM can also improve an organization's performance by fostering its innovation. As a subject of research and practice, innovation refers to the ability to create valuable and useful new products, services, technologies, or production processes (Liao & Chuang, 2006). Innovation has been recognized as a primary value creator for organizations, both in times of generating revenue and in times of cutting costs. Innovation consists of two important dimensions: magnitude, which reflects the extent or breadth of innovation, and speed, which shows an organization's quickness to adopt an innovation relative to its competitors (Liao & Chuang, 2006). KM plays a critical role in the ability of an organization to be innovative because KM initiatives and activities often serve as a key platform for creating new and inventive ideas that will benefit and add value to the organization.
More specifically, KM activities such as knowledge generation and sharing can broaden understanding of relevant issues and concepts, and push thinking beyond the constraints of presumption, narrow rationality, and traditional methods. Therefore, KM can be an important organizational practice that spurs innovation. In summary, RBV suggests that a firm can outperform its competitors by taking advantage of its KM. As a unique company resource, KM plays a fundamental role in firm performance because it facilitates organizational learning, encourages knowledge sharing, increases agility, and fosters innovation. Although KM resources are complex to acquire and difficult to leverage, firms that succeed in doing so are likely to experience learning effects whereby they improve their abilities for creating value. This directly leads to the following hypothesis:

H3: KM performance is positively related to firm performance.
IT-KM Competence and Firm Performance
IT-KM competence is defined as a firm's IT and KM ability and resources that are peculiar to achieving and sustaining business success. The new variable is conceptualized as a composite construct with IT capability and KM performance as its two formative indicators. Such a conceptualization is in line with prior research and RBV, which suggest that KM and IT are inseparable from each other and that both are unique and important firm resources. Thus, the current literature supports the idea of representing IT capability and KM performance by a single composite construct that impacts firm performance. We contend that such a conceptualization can push our thinking beyond current theoretical boundaries and offer a new perspective for investigating determinants of firm performance. Thus, we advance the following hypothesis:
H4: IT-KM competence is positively related to firm performance.
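A minimal sketch of how such a formative composite might be operationalized is shown below; it is offered purely for illustration and is not part of the chapter's model. The sample, the equal indicator weights, and all coefficients are hypothetical (in an actual study the weights would be estimated, for example with PLS path modeling).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of firms

# Hypothetical standardized indicator scores (e.g., from survey instruments)
it_capability = rng.normal(size=n)
km_performance = 0.5 * it_capability + rng.normal(scale=0.8, size=n)  # echoes H1

# Formative composite: a weighted sum of its two indicators.
# Equal weights are an assumption; they would normally be estimated.
w_it, w_km = 0.5, 0.5
it_km_competence = w_it * it_capability + w_km * km_performance

# Simulated firm performance driven by the composite (echoes H4), plus noise
firm_performance = 0.6 * it_km_competence + rng.normal(scale=0.7, size=n)

# Simple OLS check of H4: regress firm performance on the composite
X = np.column_stack([np.ones(n), it_km_competence])
beta, *_ = np.linalg.lstsq(X, firm_performance, rcond=None)
print(f"Estimated effect of IT-KM competence on firm performance: {beta[1]:.2f}")
```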
MEASURING THE VARIABLES
Firm performance can be measured in a variety of ways, including financial performance, market performance, and business process performance. Financial performance is usually evaluated by means of standard profit and cost ratios, which can be calculated using accounting data obtained from Standard & Poor's COMPUSTAT. A common way to assess market performance is to use Tobin's q, which can also be calculated using COMPUSTAT data (a computational sketch appears at the end of this section). However, researchers need to be aware that for private firms and not-for-profit organizations, accounting data are not readily available in COMPUSTAT. Perceived business process performance can be evaluated by using a survey questionnaire. Often, researchers can find well-developed survey instruments in the literature and adapt them for their specific needs. In addition, it is very important to address data validity and reliability issues when using survey data to test research hypotheses. Past research suggests that IT capability can be measured by IT spending/use, survey questionnaires, or the results of studies conducted by public independent organizations. IT spending/use data are often available in annual corporate financial reports. InformationWeek and ComputerWorld are two publicly available sources of data on corporate IT spending and other measures of IT use. Survey instruments for some constructs related to IT capability have already been developed and applied in practice by prior IS research, such as the aforementioned study by Tanriverdi (2005). The results of independent organizations' studies are also a very valuable source of IT capability data. For example, the IT leader study by InformationWeek may provide data useful for measuring an organization's capability to leverage its IT resources on a continuous basis. Past research suggests that KM performance can be measured
by survey questionnaires or the results of studies conducted by public independent organizations. Tanriverdi (2005) has developed a survey instrument to assess the extent to which an organization creates, transfers, integrates, and leverages related product, customer, and managerial knowledge resources. KM performance data may also be obtained by collecting and analyzing the results of relevant studies conducted by independent KM research organizations such as KMWorld (http://www.kmworld.com), and Teleos and its KNOW Network (http://www.knowledgebusiness.com).
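To illustrate the market performance measure referred to above, the following sketch computes a simple, widely used approximation of Tobin's q from COMPUSTAT-style accounting items. The formula variant, field names, and figures are our assumptions for illustration; finer approximations (e.g., Chung and Pruitt's) additionally adjust for preferred stock and the composition of debt.

```python
def tobins_q(price_per_share, shares_outstanding, total_liabilities, total_assets):
    """Simple approximation of Tobin's q: (market value of equity +
    book value of liabilities) / book value of total assets."""
    market_value_equity = price_per_share * shares_outstanding
    return (market_value_equity + total_liabilities) / total_assets

# Hypothetical COMPUSTAT-style figures ($ millions, except the share price)
q = tobins_q(price_per_share=42.0, shares_outstanding=150.0,
             total_liabilities=3_200.0, total_assets=7_500.0)
print(f"Tobin's q = {q:.2f}")  # a q above 1 suggests the market values the
                               # firm's intangibles, such as IT and KM resources
```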
CONCLUSION
Over the past decade, one of the most striking developments in business has been the rapid proliferation of KM. Organizations have launched KM initiatives to consolidate and reconcile knowledge assets that enable them to compete in the dynamic and changing global business environment. Therefore, in parallel to the focus on the relationship between IT and firm performance, the role of KM in firm profitability has also received considerable research attention. Drawing on the RBV of the firm, plus findings from prior research, this chapter argues that both IT capability and KM performance are primary antecedents of firm performance and that IT capability has a significant impact on KM performance. The current chapter also introduces a new composite variable – IT-KM competence – with IT capability and KM performance as its formative indicators. As a result, this chapter broadens our understanding of the relationships among IT, KM, and firm performance by (1) viewing both IT and KM as unique and important firm resources, (2) suggesting that KM can play a mediating role between IT and firm performance, and (3) proposing that IT and KM may be represented by a single composite variable, which might play a more important and effective role in predicting firm performance.
NOTE
Authors are listed alphabetically and have contributed equally to this chapter.
REFERENCES
Babcock, P. (2004). Shedding light on knowledge management. HRMagazine, 49(5), 46–50.
Barney, J. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120. doi:10.1177/014920639101700108
Barth, S. (2000). Eureka! Xerox has found it. Retrieved from http://kazman.shidler.hawaii.edu/eurekacase.html
Bennet, A., & Bennet, D. (2003). The partnership between organizational learning and knowledge management. In C. W. Holsapple (Ed.), Handbook on knowledge management 1: Knowledge matters (pp. 439-455). Berlin/Heidelberg, Germany: Springer-Verlag.
Bharadwaj, A. S. (2000). A resource-based perspective on information technology capability and firm performance: An empirical investigation. MIS Quarterly, 24(1), 169–196. doi:10.2307/3250983
Bonczek, R., Holsapple, C., & Whinston, A. (1981). Foundations of decision support systems. New York: Academic Press.
BusinessWeek. (2001). How bad will it get? Retrieved from http://www.businessweek.com/magazine/content/01_11/b3723017.htm
Chuang, S. (2004). A resource-based perspective on knowledge management capability and competitive advantage: An empirical investigation. Expert Systems with Applications, 27(3), 459–465. doi:10.1016/j.eswa.2004.05.008
Cline, M. K., & Guynes, C. S. (2001). A study of the impact of information technology investment on firm performance. Journal of Computer Information Systems, 41(3), 15–19.
Davenport, T. H., & Prusak, L. (1998). Working knowledge. Cambridge, MA: Harvard Business School Press.
Dehning, B., & Richardson, V. J. (2002). Returns on investments in information technology: A research synthesis. Journal of Information Systems, 16(1), 7–30. doi:10.2308/jis.2002.16.1.7
Dove, R. (2003). Knowledge management and agility: Relationships and roles. In C. W. Holsapple (Ed.), Handbook on knowledge management 2: Knowledge directions (pp. 309-330). Berlin/Heidelberg, Germany: Springer-Verlag.
Gartner. (2007). Gartner says worldwide IT spending to surpass $3 trillion in 2007. Retrieved from http://www.gartner.com/it/page.jsp?id=529409
Grant, R. M. (1991). The resource-based theory of competitive advantage: Implications for strategy formulation. California Management Review, 33(3), 114–135.
Grant, R. M. (1996). Prospering in dynamically-competitive environments: Organizational capability as knowledge integration. Organization Science, 7(4), 375–387. doi:10.1287/orsc.7.4.375
Gupta, J. N. D., Sharma, S. K., & Hsu, J. (2004). An overview of knowledge management. In J. N. D. Gupta & S. K. Sharma (Eds.), Creating knowledge based organizations (pp. 1-29). Hershey, PA: Idea Group Inc.
Holsapple, C., & Whinston, A. (1996). Decision support systems: A knowledge-based approach. Minneapolis, MN: West Publishing.
Holsapple, C. W. (2005). The inseparability of modern knowledge management and computer-based technology. Journal of Knowledge Management, 9(1), 42–52. doi:10.1108/13673270510582956
Holsapple, C. W., & Joshi, K. D. (2004). A formal knowledge management ontology: Conduct, activities, resources, and influences. Journal of the American Society for Information Science and Technology, 55(7), 593–612. doi:10.1002/asi.20007
Karimi, J. K., Somers, T. M., & Bhattacherjee, A. (2007). The role of information systems resources in ERP capability building and business process outcomes. Journal of Management Information Systems, 24(2), 221–260. doi:10.2753/MIS0742-1222240209
Holsapple, C. W., & Singh, M. (2003). The knowledge chain model: Activities for competitiveness. In C. W. Holsapple (Ed.), Handbook on knowledge management 2: Knowledge directions (pp. 215-251). Berlin/Heidelberg, Germany: Springer-Verlag.
Laudon, K. C., & Laudon, J. P. (2006). Management information systems: Managing the digital firm. NJ: Prentice Hall.
Holsapple, C. W., & Singh, M. (2005). Performance implications of the knowledge chain. International Journal of Knowledge Management, 1(4), 1–22.
Holsapple, C. W., & Wu, J. (2008). In search of a missing link. Knowledge Management Research & Practice, 6(1), 31–40. doi:10.1057/palgrave.kmrp.8500170
Huff, S. L., & Munro, M. C. (1985). Information technology assessment and adoption: A field study. MIS Quarterly, 9(4), 327–340. doi:10.2307/249233
InformationWeek. (2007). Global IT spending to reach $1.48 trillion in 2010, IDC says. Retrieved from http://www.informationweek.com/news/management/outsourcing/showArticle.jhtml;jsessionid=JB4IADNZACBP2QSNDLRSKH0CJUNN2JVN?articleID=196802764&_requestid=885619
Jones, K. G. (2004). An investigation of activities related to knowledge management and their impacts on competitiveness. Unpublished doctoral dissertation, University of Kentucky.
Liao, C., & Chuang, S. (2006). Exploring the role of knowledge management for enhancing firm's innovation and performance. In Proceedings of the 39th Hawaii International Conference on System Sciences.
Mata, F. J., Fuerst, W. L., & Barney, J. B. (1995). Information technology and sustained competitive advantage: A resource-based analysis. MIS Quarterly, 19(4), 487–505. doi:10.2307/249630
Mathiyalakan, S., Ashrafi, N., Zhang, W., Waage, F., Kuilboer, J., & Heimann, D. (2005, May 15-18). Defining business agility: An exploratory study. In Proceedings of the 16th Information Resource Management Association International Conference, San Diego, CA.
Motsenigos, A., & Young, J. (2002). KM in the U.S. government sector. KMWorld. Retrieved from http://www.kmworld.com/Articles/Editorial/Feature/KM-in-the-U.S.-government-sector-9397.aspx
Newbert, S. L. (2007). Empirical research on the resource-based view of the firm: An assessment and suggestions for future research. Strategic Management Journal, 28(2), 121–146. doi:10.1002/smj.573
O'Brien, J. A., & Marakas, G. (2007). Introduction to information systems (13th ed.). McGraw-Hill/Irwin.
Ray, G., Muhanna, W. A., & Barney, J. B. (2005). Information technology and the performance of the customer service process: A resource-based analysis. MIS Quarterly, 29(4), 625–652.
Tanriverdi, H. (2005). Information technology relatedness, knowledge management capability, and performance of multibusiness firms. MIS Quarterly, 29(2), 311–334.
Wade, M., & Hulland, J. (2004). Review: The resource-based view and information systems research: Review, extension, and suggestions for future research. MIS Quarterly, 28(1), 107–142.
Wheeler, B. C. (2002). NEBIC: A dynamic capabilities theory for assessing net-enablement. Information Systems Research, 13(2), 125–146. doi:10.1287/isre.13.2.125.89
WITSA. (2000). Digital planet 2000. Arlington, VA: World Information Technology and Services Alliance.
Wu, J. (2008). Exploring the link between knowledge management performance and firm performance. Unpublished doctoral dissertation, University of Kentucky.
KEY TERMS AND DEFINITIONS
Agility: Refers to an organization's ability to detect changes, opportunities, and threats in its business environment and to provide speedy and focused responses to customers and other stakeholders by reconfiguring resources and processes and/or by developing strategic partnerships and alliances (Mathiyalakan et al., 2005).
Information Technology: Can be defined as a broad range of technologies involved in
information processing and handling, such as computer hardware, software, telecommunications, and databases (Huff & Munro, 1985).
IT Capability: Refers to an organization's ability to identify IT meeting business needs, to deploy IT to improve business processes in a cost-effective manner, and to provide long-term maintenance and support for IT-based systems (Karimi et al., 2007).
IT Relatedness: Is defined as "the use of common IT infrastructures and common IT management processes across business units" (Tanriverdi, 2005, p. 317).
IT-KM Competence: Is defined as a firm's IT and KM ability and resources that are peculiar to achieving and sustaining business success.
Innovation: Refers to the ability to create valuable and useful new products, services, technologies, or production processes (Liao & Chuang, 2006).
Knowledge: Refers to a fluid mix of framed experience, values, contextual information, and expert insight that offers a framework for interpreting, assimilating, and integrating new experiences and information (Davenport & Prusak, 1998).
Knowledge Management: Is "an entity's systematic and deliberate efforts to expand, cultivate, and apply available knowledge in ways that add value to the entity, in the sense of positive results in accomplishing its objectives or fulfilling its purpose" (Holsapple & Joshi, 2004, p. 593).
KM Performance: Is the degree to which KM activities harness organizational resources to achieve the goals or purposes of KM initiatives.
A Resource: Can be defined as a rare and inimitable firm-specific asset that adds value to firms' operations by enabling them to implement strategies that improve efficiency and effectiveness (Karimi et al., 2007).
This work was previously published in Handbook of Research on Contemporary Theoretical Models in Information Systems, edited by Yogesh K. Dwivedi, Banita Lal, Michael D. Williams, Scott L. Schneberger and Michael R. Wade, pp. 296-310, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 4.15
A Decision Support System for Selecting Secure Web Services Khaled M. Khan Qatar University, Qatar
INTRODUCTION
Web services are becoming an important area of business processing and research for enterprise systems. Various Web service providers currently offer diverse computing services ranging from entertainment, finance, and health care to real-time applications. With the widespread proliferation of Web services, not only has delivering secure services become a critical challenge for service providers, but users also face constant challenges in selecting appropriate Web services for their enterprise application systems. Security has become an important issue for information systems (IS) managers seeking a secure integration of Web services with their enterprise systems. Security is one of the
determining factors in selecting appropriate Web services. The need for run-time composition of enterprise systems with third-party Web services requires a careful selection process of Web services with security assurances consistent with the enterprise business goal. Selecting appropriate Web services with the required security assurances is essentially a problem of choice among several alternative services available in the market. IS managers have little control over the actual security behavior of third-party Web services; however, they can control the selection of the right services, which are likely to comply with their security requirements. Selecting third-party Web services arbitrarily over the Internet is as risky as it is critical. With increasing security challenges to enterprise systems, there is a need for an automatic decision support system (DSS) for the selection of
appropriate secure Web services. A DSS analyzes the security profiles of candidate Web services and compares them with the security requirements of the enterprise system. With such a system, IS managers can more easily decide which Web service should be integrated with their applications. A DSS could make a comparative analysis of various security properties between a candidate Web service and the enclosing enterprise system, including the consequences of different decision alternatives in selecting Web services. It could also project the likely additional security properties needed for the system if the candidate Web service lacked required properties. The complex nature of selecting secure Web services could not be easily managed without such DSS support. With the rapidly evolving nature of security contexts in the field of enterprise systems, decision support systems for selecting secure Web services can play an increasingly important role. This article proposes an architecture for an easy-to-use security decision support system (SDSS) for selecting Web services with security assurances consistent with the enterprise business goal. The SDSS stores the security profiles of candidate Web services, compares their properties with the security requirements of the enterprise system, and generates alternatives with consequences. Supporting the choice-making process involves the evaluation and comparison of alternative Web services in terms of their security properties. To minimize the risks of selecting the wrong Web services for the enterprise systems, the SDSS can provide managers with consistent and concise guidance for the development of security criteria. The proposed SDSS has been developed to provide IS managers with the information necessary to make informed decisions regarding the selection of Web services. The basic components of the SDSS include a knowledge base of various security properties and an inference mechanism that uses a set of rules. The architecture consists of three components: (i) defining security criteria; (ii) security profiling of Web services; and (iii) generating alternatives.
1114
bAcKGrOUND Making decisions concerning the selection of Web services with security compliances often strains the cognitive capabilities of the IS managers because many complex attributes are involved. Analyzing these complex attributes and predicting the security outcome of independent Web services is a daunting task. The human intuitive judgment and decision making capability is rather limited, and this ability deteriorates further with the complexity of assessing security issues manually. The final decision to select a particular Web service for an enterprise system is critical because such a decision is considerably influenced by many complex security attributes of the service. A computeraided decision making process may manage this complexity in a more optimal way. One of many decision-making approaches in which decisions are made with the help of computer-aided process is generally called decision support system (DSS). A DSS can take many different forms. In general, a DSS is a computerized system for helping people make decisions (Alter, 1980; Power, 1997, 2007). According to Finlay (1994) and Turban (1995), a DSS is an interactive, flexible, and adaptable computer-based information system, especially developed for supporting the decision making. In our context in this article, we emphasize a knowledge-driven decision that helps managers to make a choice between alternative Web services based on their supporting security properties. It is an interactive computer-based system that aids IS managers in making judgments and choices regarding the selection of Web services which match their expectation. This article focuses primarily on the components that process various criteria against the provided data and generates best alternatives. During the process of selecting appropriate Web services for the enterprises, IS managers often make decisions on which Web services should be integrated with their application. Considering the value of the information assets of the organizations,
A Decision Support System for Selecting Secure Web Services
it is unlikely that managers assess only the business functionalities that Web services provide for their organizational needs. They should also consider the security implications of using Web services with their applications. The decision-making process of selecting security-aware Web services requires a systematic approach, and a decision support system could aid managers with an automated one. In current practice, IS managers use Web services without properly assessing the security compliance of the services (Khan, 2006, 2007). Managers could use decision support systems that significantly improve the selection process of secure Web services. Many decision-making techniques already published in the literature can be used for the selection of an entity among various alternatives. Classical multi-attribute utility theory (MAUT) by Keeney and Raiffa (1976), the analytical hierarchy process (AHP) by Saaty (1980), and a recent approach by Besharati, Azarm, and Kannan (2005) for the selection of product design are among these approaches. MAUT has been used in many application domains for modeling decision makers' preferences for ranking a set of alternative decisions. AHP has been extensively used in marketing and management areas. However, the use of any of these models for security issues has not been reported yet. An agent-based DSS methodology reported in Choi, Kim, Park, and Whinston (2004) and a market-based allocation methodology (Parameswaran, Stallaert, & Whinston, 2001) are likewise not applicable in the security arena. Although these two DSS methodologies are used in product selection, their applicability to the selection of secure software products is limited. Most research in the area of Web services security focuses on how to make Web services secure. Some papers propose security metrics models, as reported in Berinato (2005) and Payne (2002). Khan (2006) proposes a scheme for assessing the security properties of software components; the assessment scheme assigns the candidate software component a numeric score indicating the relative strength of its security properties. Payne (2002) proposes a seven-step methodology to guide the process of defining security metrics. Berinato (2005) argues for constant measurement of security incidents, which could be used to quantify the efficiency of deployed security functions. The National Institute of Standards and Technology (Swanson, Bartol, Sabato, Hash, & Graffo, 2003) defines a security metrics guide for information technology systems; the document provides guidance on how an enterprise, through the use of metrics, identifies its security needs, security controls, policies, and procedures.
SECURITY DECISION SUPPORT SYSTEM (SDSS)
The environment of the proposed SDSS consists of a preprocess and the architecture, as illustrated in Figure 1. The preprocess has two related activities: (i) specification of the security requirements or security objectives of the identified functionality, and (ii) gathering the security properties of the candidate Web services. These two preprocess activities are required in the process of constructing the alternatives. Managers identify what security requirements their enterprise system needs from the Web services. This essentially sets the security requirements consistent with the enterprise-wide security requirements and assurances. For example, a financial enterprise system needs a service for calculating tax offsets. Many third-party Web services may offer this functionality with various security functions such as confidentiality of data, integrity of the calculated data, and so forth. The managers must determine what type of security the identified functionality will have. Security requirements related to the functionality may be determined based on the threats, vulnerabilities, and risks associated with the functionality.
Figure 1. The environment of security decision support system (SDSS)
The selection of a list of candidate Web services is based on the identified functionality. At this stage, the conformity of the defined security requirements is not checked. We select only those Web services that can provide the desired functionality, such as calculating tax offsets. This also involves gathering information regarding the security properties of the candidate Web services. We call this security profiling. In this activity, the security properties of each candidate Web service are collected from the published security claims, available user's guides, and enquiries (Khan, 2006). This profiling process enlists all security properties supporting the claimed security function of each Web service.
THE ARCHITECTURE
A DSS can consist of various types of components based on the domain of application and the type of system. Three fundamental components of a DSS are identified by Sprague and Carlson (1982): the database management system, the model-based management system, and the user interface. Haag, Cummings, McCubbrey, Pinsonneault, and Donovan (2000) decompose these three components in more detail. Consistent with these, Figure 2 depicts the three major components of the proposed SDSS. The identified security requirements and the security properties supported by each candidate Web service are the main inputs to the system. These data are structured and mapped into the same format by the system. The security profiles are stored in the knowledge base. The output of the system is a rating of the candidate Web services.

Figure 2. A decision support system for selecting secure Web services
DEFINING SECURITY CRITERIA
This component of the SDSS defines the criteria for the security requirements in a structured form based on the information provided by the IS managers. The elements of the security criteria are defined according to ISO/IEC Standard 15408, the Common Criteria (CC) for Information Technology Security Evaluation (Common Criteria,
1999). The CC provides evaluation measures of security by means of a common set of requirements for the security functions of a system and a set of evaluation measures. The entire approach is quantitative, and it describes the security behavior of functions expected of an information system. The CC gives a comprehensive catalogue of high-level security requirements and assurances for information systems products. The CC consists of 11 classes for the generic grouping of similar types of security requirements: security audit, communication, cryptographic support, user data protection, identification and authentication, security management, privacy, protection of system security functions, resource utilization, system access, and trusted path and channels. The criteria are formulated in a tree structure. The idea is to decompose a complex decision-making problem into simpler subcomponents that are easily quantifiable and comparable in terms of values. In other words, security requirements are modeled into smaller subcomponents using a tree structure. The root of the tree (level 0) is the name of the functionality. Each node at level 1 represents a CC security class such as user data protection, authentication, or security audit. Level-2 and lower-level nodes represent the decomposed security attributes of the requirements. The value associated with each level-1 node signifies the relative importance of the node compared to other nodes at the same level. All values, irrespective of the total number of nodes at level 1, must sum to 100. Figure 3 illustrates an example.

Figure 3. An example of security criteria
The example in Figure 3 illustrates the security requirements of the functionality calculate tax. The functionality has two security requirements, user data protection and authentication, shown as level-1 nodes. Each of the two requirements has equal importance, as shown by the value 50. User data protection has one security property, shown at a level-2 node: encrypted(amount, ID). This property is further decomposed into two attributes: the key length should be 128 bits and the encryption algorithm should be RSA. The requirement authentication has a security property: digital_signature(service_provider). This property has one attribute: the algorithm used, which should be RSA. This information will be stored in the knowledge base. In this example, several key issues are addressed in order to check the conformity of user data protection and authentication. For instance, if user data protection is achieved by means of encrypted data, then the next issues include the length of the key and the algorithm used in the encryption. These are the specific criteria for checking whether a Web service can provide the required functionality with the desired level of confidentiality of tax data.
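To make the tree structure concrete, the following minimal sketch encodes the Figure 3 example as a small data structure. It is our illustration, not part of the SDSS implementation; the dictionary layout is an assumption, while the predicate names follow the chapter's notation.

```python
# A minimal sketch of the Figure 3 criteria tree. Each level-1 node carries a
# weight (all weights sum to 100); leaves name the concrete attributes to check.
security_criteria = {
    "functionality": "calculate tax",            # level 0: root
    "requirements": [                            # level 1: CC security classes
        {
            "cc_class": "user data protection",
            "weight": 50,
            "property": "encrypted(amount, ID)", # level 2: security property
            "attributes": {"key_length": 128, "algorithm": "RSA"},  # level 3
        },
        {
            "cc_class": "authentication",
            "weight": 50,
            "property": "digital_signature(service_provider)",
            "attributes": {"algorithm": "RSA"},
        },
    ],
}

# Sanity check mirroring the rule that level-1 weights must total 100
assert sum(r["weight"] for r in security_criteria["requirements"]) == 100
```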
STORING SECURITY PROFILE
The security information about each Web service gathered in the preprocess is entered into the system. The format of the data is the same as
shown in Figure 3. The SDSS provides a template in order to capture the security properties of the candidate services. Let us consider a scenario. Assume we have selected three candidate Web services called A, B, and C. All of these services provide the same functionality, calculate tax, but with varying security properties. Web service A supports user data protection with encryption using a key length of 256 bits, but does not provide authentication. Service B provides authentication with a digital signature using the RSA algorithm, but does not support user data protection. All of these data are entered into the system. Logic programming (Baral & Gelfond, 1993) can be used as a formal reasoning tool to characterize the security requirements and security profiles of the candidate Web services. The simple structure of a logic program allows us to represent complicated forms of security knowledge and its properties, and yet it is based on mathematical logic (Das, 1992). In logic programming, security properties can be expressed in symbolic notations such as encrypted and digital_signature. Some of these properties are identical to those defined in BAN logic (Burrows, Abadi, & Needham, 1989).
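In the same spirit, the profiles of the candidate services in this scenario can be stored as ground facts. The encoding below is our sketch, loosely mirroring the logic-program notation described above rather than an actual smodels/lparse program.

```python
# Hypothetical fact-style encoding of the candidate services' security profiles.
# Each entry maps a claimed security property to its supporting attributes.
profiles = {
    "A": {"encrypted(amount, ID)": {"key_length": 256, "algorithm": "RSA"}},
    # Service A protects user data (256-bit key) but offers no authentication.
    "B": {"digital_signature(service_provider)": {"algorithm": "RSA"}},
    # Service B offers authentication but no user data protection.
    "C": {},  # Service C's profile is not detailed in the scenario.
}
```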
GENERATING ALTERNATIVES
This is the heart of the SDSS. This component produces a comparative rating of all candidate Web services based on the conformity of their security profiles with the enterprise-wide security criteria. Inference rules are used to calculate the deviation between the security criteria and the profiles. This component uses a rule-based approach in order to make a comparative rating of the candidate services. Security properties are reasoned about with the inference rules of logic programming. The inference rules are applied to check whether the security profile of a Web service matches the security requirements of the enterprise system. The rating of a candidate service is based on a value ranging from 0 to 100. The points are
calculated on the basis of the presence or absence of certain security properties in the Web services. For example, if a node with a value of 40 at level 1 is in full compliance with the security criteria, it contributes 40 of the 100 points. IS managers can evaluate the ratings of the candidate services and make a decision regarding which Web service will be selected. They can also modify or adjust the security requirements and obtain a different rating of the same set of Web services. It is also possible to add newly identified security properties to the profile of a candidate Web service and process the properties for a new rating.
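Putting the pieces together, a rating of the kind described here can be computed by walking the criteria tree and awarding each level-1 node's weight only when the candidate's profile satisfies the node's property and attributes. The self-contained sketch below repeats the running example; the matching policy (numeric attributes treated as minimum thresholds, names as exact matches) is our assumption, since the chapter leaves the inference rules abstract.

```python
# Criteria from Figure 3 as (property, weight, required attributes) triples,
# and the candidate profiles from the scenario above.
CRITERIA = [
    ("encrypted(amount, ID)", 50, {"key_length": 128, "algorithm": "RSA"}),
    ("digital_signature(service_provider)", 50, {"algorithm": "RSA"}),
]
PROFILES = {
    "A": {"encrypted(amount, ID)": {"key_length": 256, "algorithm": "RSA"}},
    "B": {"digital_signature(service_provider)": {"algorithm": "RSA"}},
    "C": {},
}

def satisfies(required, offered):
    # Assumed policy: numeric attributes (e.g., key length) are minimums,
    # while names (e.g., the algorithm) must match exactly.
    if isinstance(required, (int, float)):
        return isinstance(offered, (int, float)) and offered >= required
    return offered == required

def rate(profile):
    """Score a candidate 0-100: a level-1 requirement contributes its full
    weight only when its property is present with every attribute satisfied."""
    return sum(
        weight
        for prop, weight, attrs in CRITERIA
        if prop in profile
        and all(satisfies(v, profile[prop].get(k)) for k, v in attrs.items())
    )

for name, profile in PROFILES.items():
    print(f"Service {name}: {rate(profile)}/100")  # A: 50, B: 50, C: 0
```

Under this rule neither A nor B fully meets the criteria, leaving the manager to weigh the alternatives or revise the requirements, which is exactly the what-if adjustment described above.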
FUTURE TRENDS
We plan to implement the proposed approach as a prototype in order to test its applicability. We are currently evaluating existing tools such as smodels and lparse (Syrjanen, 2000), which could be utilized as supporting components of the SDSS to facilitate the inference mechanisms. The approach can be further expanded to select Web services with other nonfunctional properties such as usability, maintainability, and reusability. Further research includes the automatic verification of the security profiles of Web services against their implementations. More challenging research could be directed toward automating the significant activities of the preprocess, such as the automatic gathering of security profiles of Web services.
CONCLUSION

This article has presented a framework for a security decision support system for selecting Web services with appropriate security assurances. The article argues that the selection of appropriate Web services for an enterprise application needs automated tool support such as
the SDSS. The approach could be automated to aid enterprise decision support systems, and it can be used in the strategic planning of information systems as well. The main contribution of this article is a framework for the assessment of Web services security properties on which further work can be built. The evaluation method presented here is flexible, as security requirements can be altered by IS managers as they see fit for their needs.
REFERENCES

Alter, S. L. (1980). Decision support systems: Current practice and continuing challenges. Reading, MA: Addison-Wesley.

Baral, C., & Gelfond, M. (1993). Representing concurrent actions in extended logic programming. In Proceedings of the 13th International Joint Conference on Artificial Intelligence, Chambery, France (pp. 866-871).

Berinato, S. (2005, July). A few good metrics. CIO-Asia Magazine. Online version: http://www.csoonline.com/read/070105/metrics.html

Besharati, B., Azram, S., & Kannan, P. K. (2005). A decision support system for product design selection: A generalized purchase modeling approach. Decision Support Systems, 42, 333-350. doi:10.1016/j.dss.2005.01.002

Burrows, M., Abadi, M., & Needham, R. (1989, December). A logic of authentication. ACM Operating Systems Review, 23(5), 1-13. A fuller version was published as DEC System Research Centre Report Number 39, Palo Alto, California, February.

Choi, H. R., Kim, H. S., Park, Y. J., & Whinston, A. B. (2004, March). An agent for selecting optimal order set in EC marketplace. Decision Support Systems, 36(4), 371-383. doi:10.1016/S0167-9236(03)00027-7

Common Criteria. (1999). Common criteria for information technology security evaluation (ISO/IEC 15408). NIST. Retrieved December 6, 2007, from http://csrc.nist.gov/cc/

Das, S. K. (1992). Deductive databases and logic programming. Addison-Wesley.

Finlay, P. N. (1994). Introducing decision support systems. Oxford, UK; Cambridge, MA: NCC Blackwell; Blackwell Publishers.

Haag, S., Cummings, M., McCubbrey, D., Pinsonneault, A., & Donovan, R. (2000). Management information systems: For the information age (pp. 136-140). McGraw-Hill Ryerson.

Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives. New York: John Wiley and Sons.

Khan, K. (2007). Selecting Web services with security compliances: A managerial perspective. In Proceedings of the Pacific Asia Conference on Information Systems (PACIS).

Khan, K., & Han, J. (2006, April). Assessing security properties of software components: A software engineer's perspective. In Proceedings of the Australian Software Engineering Conference. IEEE Computer Society.

Parameswaran, M., Stallaert, J., & Whinston, A. B. (2001, August). A market-based allocation mechanism for the DiffServ framework. Decision Support Systems, 31(3), 351-356. doi:10.1016/S0167-9236(00)00143-3

Payne, S. (2002). A guide to security metrics. SANS Institute.

Power, D. J. (1997). What is a DSS? The On-Line Executive Journal for Data-Intensive Decision Support, 1(3). Online version: http://www.taborcommunications.com/dsstar/97/1021/971021.html

Power, D. J. (2004, February). Specifying an expanded framework for classifying and describing decision support systems. Communications of the Association for Information Systems, 13(13), 158-166.

Power, D. J. (2007). A brief history of decision support systems. DSSResources.COM. Retrieved December 6, 2007, from http://DSSResources.COM/history/dsshistory.html

Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill.

Sprague, R. H., & Carlson, E. D. (1982). Building effective decision support systems. Englewood Cliffs, NJ: Prentice-Hall.

Swanson, M., Bartol, N., Sabato, J., Hash, J., & Graffo, L. (2003, July). Security metrics guide for information technology systems (Special Publication 800-55). National Institute of Standards and Technology (NIST).

Syrjanen, T. (2000). Lparse 1.0 user's manual. University of Helsinki.

Turban, E. (1995). Decision support and expert systems: Management support systems. Englewood Cliffs, NJ: Prentice Hall.
KEY TERMS AND DEFINITIONS

Security Class: A security class represents a generic grouping of similar types of security objectives that share a common focus while differing in coverage of security functions as well as security properties.

Security Criteria: A security criterion is a rule with a set of security properties that can be used to assess a security function or security objective. A security criterion tests whether a security function has the desired security properties.

Security Function: A security function is the implementation of a security policy as well as a security objective. It enforces the security policy and provides required capabilities. Security functions are defined to withstand certain security threats, vulnerabilities, and risks. A security function usually consists of one or more principals, resources, security properties, and security operations.

Security Objective: A security objective is an abstract representation of a security goal. A security objective defines a desired security state of an entity or data of the system. It represents the main goal of a security policy.

Security Profiling: Security profiling is the security characterization of an entity, a service, or a component in terms of security objectives as well as security properties. It spells out the actual implemented security characteristics of an entity.

Security Property: A security property is an implementation element used in a security function. A set of security properties can form a security function. A security property is an element at the lowest level of the implementation.

Web Services: A Web service is a platform-independent and self-contained piece of software with defined functionality that can be made available over the Internet. It provides a standard way of integrating mechanisms with enterprise applications over the net. A Web service can perform one or more functionalities for a complex application system.
This work was previously published in Encyclopedia of Decision Making and Decision Support Technologies, edited by Frederic Adam and Patrick Humphreys, pp. 211-217, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 4.16
ERP Systems Supporting Lean Manufacturing in SMEs

Pritish Halgeri, Kansas State University, USA
Roger McHaney, Kansas State University, USA
Z. J. Pei, Kansas State University, USA
DOI: 10.4018/978-1-60566-892-5.ch005

ABSTRACT

Small and medium enterprises (SMEs), more than ever, are being forced to compete in a global economy with increasingly complex challenges. This new economy has forced SMEs to become more responsive and agile in operational, tactical, and strategic areas while requiring thoughtful integration between business functions and manufacturing/production/service operations. Enterprise Resource Planning (ERP) and Lean manufacturing are two production control methodologies that have been implemented in various ways. In early incarnations, ERP systems were considered a hindrance to Lean manufacturing efforts and were criticized for encouraging large inventories and slower production. The explosive growth of e-business methodologies and the resulting pressure to become nimble
and embrace rapid change forced many SMEs to rethink their production approaches, particularly in regard to where they stand in relation to these two methodologies. Over time, ERP vendors recognized the power and advantages of Lean manufacturing and developed ways to incorporate Lean-related features into their software. The main objective of this chapter is to explore how ERP and Lean methodologies can coexist in SMEs. The chapter discusses misconceptions about the fit between ERP and Lean, then summarizes the differences and synergies between the two methodologies. The chapter emphasizes how linking ERP and Lean methods can lead to competitive advantage, and then explores key Lean toolsets available in leading ERP systems used by SMEs. Additional insight is provided on several leading ERP vendors offering Lean-enabled software modules, including Oracle, TTW WinMan, and Pelion Systems.
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Small and medium enterprises (SMEs), more than ever, are being forced to compete in a global economy with increasingly complex challenges. This new economy has forced SMEs to become more responsive and agile in operational, tactical, and strategic areas while requiring thoughtful integration between business functions and manufacturing/production/service operations. When faced with similar pressures, larger firms migrated to expensive ERP systems. As early as 1999, researchers (Gable & Steward, 1999) suggested SMEs would follow suit and proposed reasons motivating this phenomenon. First, the larger enterprise market for ERP systems was becoming saturated, and ERP vendors were hungry for new markets. Second, these larger firms were pushing ERP vendors to create software that would leverage inexpensive Internet technologies to promote closer integration with their SME partners along the supply chain and obtain a variety of efficiency-based benefits. Third, SMEs made up a large portion of regional economies and represented a high percentage of overall manufacturing and service firms. And finally, ERP packages designed for SMEs had become more sophisticated, cost efficient, and upwardly scalable for growth-oriented firms. In spite of these obvious incentives, many SMEs were slow to adopt ERP technologies. According to Aladwani (2001), two fundamental sources of resistance to innovations like ERP exist: perceived risk and habit. Perceived risk refers to one's perception of the risk associated with the decision to adopt the innovation (i.e., the decision to accept an ERP system), and habit refers to the practices one routinely performs. Koh & Simpson (2005) suggest this is pronounced in SMEs due to a widespread informal culture and a disregard for formalizing business processes. Often in SMEs, a worker wears many hats and, as a result, operations are conducted on the fly and without formal procedures or documentation. Aladwani (2001)
suggests this can be hard to overcome, and for some time that appeared to be the case. In recent years, SMEs involved in the business-to-business (B2B) market have worked hard to develop delivery performance capability compatible with larger corporate customers. In many cases this means that the SME is required to interface with its clients' ERP systems. Larger enterprises rely on big ERP system vendors such as SAP, Oracle, and others (Rashid, Hossain & Patrick, 2002). The implementation cost of these systems is high and installation complex, making it difficult for SMEs to follow suit. In response, midrange and less complex systems have been developed both by the large ERP vendors and by smaller software companies. In order to continue taking advantage of being smaller, nimbler companies, and to satisfy the needs of larger corporate partners, SMEs may need to use their ERP software in conjunction with other proven systems and methodologies such as Lean planning and control tools. SMEs may need to combine capabilities to continue using other concepts such as just-in-time (JIT) and optimized production technology (Cheng & Podolsky, 1993; Deep et al., 2008; Koh & Simpson, 2005). This article specifically looks at progress made in integrating Lean production methodologies with ERP systems.
BACKGROUND

Competing Philosophies

Over the past couple of decades, philosophies related to the most effective manner of running manufacturing operations have been debated and have evolved greatly (Nakashima, 2000). Added to the mix have been increased competition and expectations for rapid production changes and retooling. This has intensified the need for more efficient and cost-effective manufacturing and put pressure on managers and production engineers to develop new and better solutions.
The methodology debate has not been without controversy, and opposing philosophies have emerged as different means to address the same basic problem (Spearman & Zazanis, 1992). Nowhere is this situation more apparent than where manufacturers are torn between two camps concerning production control (Piszczalski, 2000). One embraces Enterprise Resource Planning (ERP) systems, on which, for the past two decades, organizations have spent billions of dollars and countless hours installing enterprise-wide systems (Bradford, Mayfield & Toney, 2001). The other camp is Lean manufacturing (Piszczalski, 2000), pioneered in Japan by Toyota Motor Corporation but now embraced in the U.S. by literally thousands of firms, particularly SMEs (Bartholomew, 1999). ERP and Lean have emerged from fundamentally different approaches to production. As a result, misconceptions that Lean and ERP approaches do not mix well have emerged (Steger-Jensen & Hvolby, 2008). In some cases, these misconceptions have gone as far as to argue that ERP systems are actually the antithesis of Lean manufacturing and are largely responsible for a variety of inefficiencies (Nakashima, 2000). Others say, in no uncertain terms, that ERP is a waste of money and time. For instance, in 1999, Bartholomew reported opponents of ERP as saying: "We are staying away from ERP because it doesn't work. To do both ERP and Lean jeopardizes the success rate of either" (Bartholomew, 1999). This opinion is one side of many arguments both pro and con. Others have said that only by using computer systems (ERP) can manufacturers possibly get their arms around the Herculean task of recognizing the multitude of constraints and issues that inevitably impact operations and planning (Piszczalski, 2000). To them, it is obvious that ERP systems have an important purpose in gathering enterprise data that can and should be used in conjunction with Lean methodologies (Bradford, Mayfield & Toney, 2001). Added to competitive pressures, the explosive growth of information and telecommunications
technologies and the resultant rise of e-business (Kent, 2002) have forced most SMEs to revisit their production control methodologies and reevaluate where they stand in relation to ERP use and Lean manufacturing implementation. As attitudes have changed and information exchange has increased, Lean practitioners have begun to adopt supportive ERP software to facilitate their advanced Lean manufacturing initiatives and to allow the rich compilation of organizational data and production history to take them to a new level of operational excellence (Bragg, 2004).
Lean-Enabled ERP Development

The area of Lean-enabled ERP software has been the subject of many recent, important, and ongoing developments. However, very few researchers have examined this area to discuss the availability of Lean-enabled ERP systems (Halgeri et al., 2008). Additionally, not many researchers describe the Lean toolset features offered by ERP vendors. This article attempts to rectify that by discussing the differences between ERP and Lean methodologies and then examining available Lean toolsets provided by several ERP vendors. The article goes on to provide a list of ERP vendors offering support for Lean manufacturing and then provides additional details on several selected vendors together with the expected benefits of their software solutions.
Enterprise Resource Planning (ERP)

ERP is defined as a method for the effective planning and controlling of all the resources needed to take, make, ship, and account for customer orders in a manufacturing, distribution, or service company (Miller, 2002). ERP software attempts to integrate all departments and application modules into one computer system that serves each department's needs from a central repository. ERP effectively eliminates standalone computer systems for each functional business silo or software
module (e.g., manufacturing/production, warehousing, accounting, marketing, logistics, human resources, finance, etc.) and replaces them with an integrated program composed of various modules representing the required components of an organization's business. The primary difference is that these components are now linked through a common database, giving all entities within the organization the ability to share and view desired information while eliminating data entry and update redundancies and effectively enhancing communication with easy access to workflows (Koch, 2008). Additionally, ERP solidly integrates all necessary business functions, such as production planning, accounting, purchasing, inventory control, marketing, sales, finance, and human resources, into a single system with a shared database (Li, Liao & Lei, 2006), which enables a variety of automatic data collection not previously possible. As a direct result of ERP systems' ability to collect data and improve the ease of information sharing, several other advantages generally result from their use. For example, ERP systems consistently improve SMEs' productivity, as the systems eliminate the need for multiple entries of identical data, reduce the possibility of errors and inconsistencies, and reduce time spent on needless phone calls and inquiries to other departments. Other advantages provided to SMEs using ERP systems include improved tracking and forecasting abilities, better customer service, the ability to standardize manufacturing processes more easily, and, in some instances, cost savings after initial start-up costs are recouped (Jacobs, 2007). While ERP systems possess numerous advantages, key disadvantages also exist. For instance, ERP systems require extensive installation and configuration processes that may take from three months to a year or more depending on the size and complexity of the organization. For some SMEs this may be less of an issue, since less customization may be required. ERP systems often demand the reengineering of business processes. Installing the
software without changing the business model to match encoded best practices can result in fewer benefits and may even decrease efficiency and productivity. Additionally, employees must be trained to use both the new software and the new business processes in the conduct of their work tasks. Another disadvantage relates to cost. ERP systems can be expensive to install and run, and hidden costs also exist: training, data conversion, testing, and updates all add to the price tag. Of course, these costs are generally part of any organizational information system and should be viewed within that perspective.
Lean Manufacturing

Lean manufacturing is a term given to a family of related methodologies that seek to streamline production processes. Sometimes Lean is referred to as just-in-time (JIT) manufacturing, the kanban system, the Toyota Production System, or flow manufacturing. A primary feature of Lean manufacturing is how it uses demand to pull items into inventory and through the manufacturing process. Lean attempts to sequence these items on flow lines to maximize resource utilization (Kent, 2002). A key goal of Lean manufacturing is the continuous reduction and even eventual elimination of all waste in the production process (Mekong Capital, 2004; Strategosinc, 2008). In general, Lean manufacturing seeks to minimize all the resources (including time) used in various enterprise activities. A key aspect of Lean is to identify and reduce or eliminate non-value-added activities in design, production, and supply chain management, and to streamline customer interaction. Lean principles and practices aim to reduce cost through the relentless removal of waste and through the simplification of all manufacturing and support processes (Miller, 2002), consistent with continuous quality improvement techniques.
Key Principles of Lean Manufacturing

Several key principles lie behind the development and use of Lean manufacturing techniques. Among these are:

(1) Elimination of Waste: Lean thinking offers a definition of value and therefore allows an organization to determine what activities and resources are needed to create and maintain this value (Poppendieck, 2002).
(2) Pull Production: In its most basic form, downstream activities inform upstream activities when replenishment is required.
(3) Value Stream Identification: To deliver products as specified by customers, an assessment of the actions required to move through purchasing, production, distribution, and other parts of the value stream is necessary (Burton & Boeder, 2003; Paez et al., 2004).
(4) Standardization of Processes: Lean manufacturing seeks to develop standardized work processes to eliminate variation in the way individual workers may perform their tasks. Lean may use a combination of training techniques and process design to ensure this is possible (Mekong Capital, 2004).
(5) Quality at the Center: At the center of Lean manufacturing is a commitment to quality improvement. Lean encourages the elimination of defects at the source and strives to ensure quality inspection is completed by workers during the in-line production process (Poppendieck, 2002).
(6) People Add Value: Lean manufacturing values the human component and seeks to add value to processes through human interaction (ARC Strategies, 2007).
(7) Continuous Improvement: Lean manufacturing is never content with the status quo. It seeks to improve and enhance efficiency and production practices, so improvement is never truly complete (Turbide, 2005).
(8) Continuous Flow: Lean manufacturing seeks to implement continuous production flows free of any interruptions, bottlenecks, detours, backflows, unwanted changes, or waiting (Mekong Capital, 2004).
Lean Manufacturing Shortcomings

Lean manufacturing is not without shortcomings. Key features of Lean can only be used to their fullest potential in situations where a stable master schedule exists. This means that current processing must match capacity. Unexpected changes or an influx of unexpected orders can be difficult to accommodate. Further, changes in lead times can be problematic. Many manufacturers expecting these situations may not use Lean methodologies in the first place. Another potential shortcoming is the inability to share and communicate electronic transaction data with business operations and other parts of the enterprise, since in Lean, information flow is minimized (Nagendra & Das, 1999).
ERP AND LEAN: CAN THEY COEXIST?

ERP and Lean Compared

Although both ERP and Lean manufacturing seek to provide a similar outcome for an organization, the conflict between the methodologies can be attributed to implementation and philosophical differences in general (Bartholomew, 2003). One of these differences involves the way goods and services are viewed in the two methodologies. For instance, Lean promotes a "pull" environment as the operative principle, whereby goods or services are not purchased or produced until demand exists. Meanwhile, in traditional ERP systems, the primary focus is on the "push" principle, where forecasts are developed from historic data and orders, and goods are then produced to meet expected demand (Nakashima, 2000).
While the two approaches may not sound too different, the difference in basic philosophy between them is significant. First, in a push system, a production job begins on a start date computed by subtracting an established lead time from the date the goods are required, usually for shipping or for assembly (Spearman & Zazanis, 1992). A push system schedules the release of work based on demand. The key factor here is access to historic data that aids in the development of forecasts for items to be produced. Release times have to be fixed and cannot be modified easily for unexpected changes in the manufacturing system or the product being manufactured. Figure 1 represents a general push production system. In a push system, information from the master production schedule (MPS) flows downstream toward the finished goods inventory and is computer generated (Kent, 2002) based on history and current orders. The MPS becomes the plan used by a company for production. The MPS considers all work currently in-house and develops an aggregate plan based on forecasts for individual parts, actual orders on hand, expectations for orders, and available capacity (Lee & Adams, 1986).

Figure 1. Typical push system (Halgeri, 2008)

A pull system takes the opposite approach, and at its center is inherent flexibility. Pull systems use the actual end-customer demand to drive the manufacturing process as much as possible. Pull systems attempt to match the rate of production for each product to the rate of customer consumption (Steger-Jensen & Hvolby, 2008) as it occurs. This has the net effect of reducing inventories and holding down costs throughout the supply chain. A pull system is characterized by the practice of downstream work centers pulling stock from previous operations (Spearman & Zazanis, 1992). In other words, nothing is produced by the upstream supplier until the downstream customer signals a need (Kent, 2002). Work is coordinated by using an information flow up the supply chain, having originated with the end consumer of the product (Spearman & Zazanis, 1992). Figure 2 shows a typical pull system. Information flows from the finished goods inventory (the customer) upstream towards the raw material inventory and, in many SMEs, can be conveyed visually by the use of a kanban (Kent, 2002). Many systems are now completely run by automated, computer-driven systems.

Figure 2. Typical pull system (Halgeri, 2008)
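The operational difference can be sketched in a few lines of Python. The demand numbers and kanban level below are invented; the point is only that a push policy releases work to a fixed forecast while a pull policy replenishes what was actually consumed.

```python
# Illustrative sketch with hypothetical numbers: the same demand stream
# handled by a push policy (build to a forecast-driven schedule) and a
# pull policy (replenish only what the customer consumed, as a kanban
# loop would).

forecast = [10, 10, 10, 10]   # MPS releases, fixed in advance
actual = [6, 12, 8, 9]        # what customers actually consume

def push(forecast, actual):
    """Build to the forecast regardless of actual consumption."""
    inventory, history = 0, []
    for planned, sold in zip(forecast, actual):
        inventory += planned - sold
        history.append(inventory)
    return history

def pull(actual, kanban_level=10):
    """Replenish exactly what the downstream customer consumed."""
    inventory, history = kanban_level, []
    for sold in actual:
        consumed = min(sold, inventory)   # cannot ship more than is on hand
        inventory -= consumed             # customer pulls stock...
        inventory += consumed             # ...and each kanban triggers replenishment
        history.append(inventory)
    return history

print(push(forecast, actual))   # [4, 2, 4, 5]: inventory drifts with forecast error
print(pull(actual))             # [10, 10, 10, 10]: held at the kanban level
```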
Other differences between traditional ERP systems and Lean manufacturing are summarized in Table 1.

Table 1. Differences between ERP systems and Lean manufacturing

Aspect | ERP | Lean | Reference
Emphasis | Planning | Continual improvement of the production process | Bradford, Mayfield & Toney, 2001
Production Plans | Combination of actual sales and forecasted sales projected from historic data | Based solely on actual orders from internal, downstream processes or external customers | Nakashima, 2000
Transactions | Creates non-value-added transactions because every event and activity in the entire business is tracked (although most are done automatically) | Seeks to eliminate all waste, including movement, unnecessary transactions, and materials; seeks to speed and smooth production | Bartholomew, 1999
Traditional Approach | Top-down | Bottom-up | Nakashima, 2000
Time Horizons | As short as a few weeks but as long as a year or more; average around the 12-week mark | Based on daily production capacity and actual orders received | Nakashima, 2000
Basic Focus | Forward-looking planning, communication, and scheduling tool | Cost reduction and process improvement methodology | Bartholomew, 2003
Platform | Computer dependent | Shop floor oriented (often with computerization) | Piszczalski, 2000
Production Concepts | Loaded machine work centers | Balanced production lines with synchronized takt and cycle times | Nakashima, 2000
Information Philosophy | More is better: more information, more flexibility, more functions, and more features are desirable | Less is best: less variability, less material, less movement, and less floor space are desirable | Piszczalski, 2000
Product Movement | Product moves in batches, with specified operations performed on the complete batch before moving to the next operation | Each operation is completed on a single unit, with that unit moved to the next operation in a continuous flow | Nakashima, 2000
Advantages of ERP over Lean

ERP can provide advantages over Lean. For instance, researchers have claimed that ERP, in general, is applicable to more manufacturing firms than Lean (Spearman, Woodruff & Hopp, 1990) and that Lean should be used in high-volume production environments with relatively few part types (Bonvik, Couch & Gershwin, 1997) but not in other environments. In addition to a number of functionalities, software can extend visibility into the plant. Kanban management systems can communicate current demand to suppliers but generally fail to illuminate quality or other shop floor issues. Lean manufacturing's line-of-sight management can also fall short, since it typically does not help internal departments serve customers beyond
their immediate point of interaction. This can inhibit the design of better products. Lean can create the low-cost, efficient operations needed for survival. However, that is not the only goal of most companies, where long-term survival is also an important consideration. Thus, in many SMEs, plant software adoption and the use of historic data are expected to rise. And of course, the plant operations that can better respond, innovate, and share information will find greater opportunities for cost savings, future goods, and customers (Greene, 2004). Software can bring capabilities to the plant environment that are difficult or even impossible to achieve by any other mechanism. Of course, not every production operation will require all of the capabilities available in software systems, but for those needing to achieve specific goals, certain
features can be crucial. It is possible that SMEs not specifically interested in implementing Lean may still acquire software tools typically found in ERP or other enterprise systems for use in their environments. Several examples of these features are:

• Track and Trace: Many industries require full product histories that track all materials back to their sources for quality and other reasons. Required or not, a track-and-trace feature makes great business sense. Not only can warranty costs be reduced, but brand image can be protected. Additionally, recall costs can be better controlled, supply problems can be identified, and suppliers rated and ranked. Most ERP software systems include this feature.
• Performance Dashboard: Managers need access to performance data, and a performance dashboard provides access to information as it is acquired. Timely and accurate views of data are essential to improving business performance.
• Work Instruction Tools: Lean manufacturing relies on work standardization. In many new software systems developed to support Lean, and in many ERP systems, best practices down to the work-instruction level are maintained. An easy way to ensure quality and efficiency is through the maintenance of up-to-date work instructions. This functionality often comes from specialized vendors and ERP software providers.
• Resource Allocation Tools: Production systems with responsibility to produce multiple products often face resource allocation problems. This may include work fixtures, capital equipment, or even skilled workers. Finite capacity scheduling systems and other resource allocation tools have been developed to aid with these situations.
Potential for Linking ERP and Lean

Some academics and industry analysts have implied that ERP is old school and that Lean manufacturing has replaced it. Others, of course, disagree, saying those claims are akin to saying the automobile chassis is obsolete because a new engine has been invented (Miller, 2002). It is important to make a distinction between ERP and Lean manufacturing. ERP can be thought of as a large-scale software system for linking all parts of the firm and encouraging best practice business processes. Lean manufacturing is one of these best practices, mostly focused in the manufacturing area. At least, that is one productive way of viewing the synergy that may emerge from a marriage of the two methodologies. As an example of how the broader set of organizational data can benefit Lean manufacturing, consider a shortcoming of many Lean initiatives. Often, these initiatives make rapid progress, and then the rate of improvement plateaus. The difficulty of managing pull operations in an environment where demands and product mix fluctuate widely is partly responsible for this plateau (Bragg, 2004). Another difficulty might be related to incomplete data. Imagine a 30% reduction in process cycle time resulting from a Lean kaizen event that does not get formally communicated to marketing, procurement, material planning, customer order management, or the capacity planning functions (Metzger, 2008). What is perfectly obvious in manufacturing may not be communicated up the supply chain to other business operations. Visual pull systems developed for a local process must be communicated and linked through ERP to sustain the Lean program results across the SME (Metzger, 2008). This is especially important when the SME maintains separate production and business facilities. Having access to additional business data may lead to understanding where bottlenecks may still occur, or what operations may be further
optimized. Without historic data combined with analytic tools, this can become increasingly difficult or even impossible. The data associated with an ERP can help identify these constraints to optimize demand flow manufacturing (Bragg, 2004). Other challenges to Lean include trying to modernize manual kanban methods that use printed cards, which work well within SMEs with contained environments or "line of sight" manufacturing facilities. Major difficulties can result when these systems are extended across the plant and to suppliers, since breakdowns in manual kanbans get lost at the local level (Bragg, 2004). Typical approaches to Lean used by most SMEs today do not provide an optimal return on investment (Steger-Jensen & Hvolby, 2008), and financial data regarding each part of the operation may not be collected. Lean-enabled modules offered by ERP suppliers and specialist suppliers may help overcome many of these challenges (Bragg, 2004). In fact, ERP vendors are starting to offer new solutions aimed at bridging the gap between the shop floor and ERP (Nakashima, 2000; Bradford, Mayfield, & Toney, 2001). These solutions may be offered in the form of Lean modules or add-on components (Bradford, Mayfield & Toney, 2001). The benefits gained by implementing Lean-enabled ERP modules can be illustrated by the following example provided by Nakashima (2000). Cerberus, a New Jersey-based manufacturer of commercial fire detection systems and a division of Siemens, implemented Lean techniques and American Software's Flow Manufacturing application to achieve significant increases in both flexibility and productivity. Before implementing the Flow Manufacturing application, a new model would be produced every 15-30 days. Afterward, multiple models come off the production line every 20 seconds (Nakashima, 2000). Productivity increased by as much as 15-20%, floor space was reduced by 25-30%, and finished goods inventory was reduced
by 50%, while sales volumes increased by 35% (Nakashima, 2000). These improvements are by no means insignificant.
ERP Vendors Offering Lean Tools

Many ERP vendors have introduced "enablers" that incorporate Lean manufacturing best practices into their systems. These enablers include a variety of new modules, database items, toolsets, and business process modifications that create new functionality in existing software (Nakashima, 2000). The following sections describe these features in more detail (Halgeri, 2008):
Toolset #1: Just-in-Time

A key difference between ERP implementation and Lean manufacturing has been the approach to production. Since ERP has traditionally used a push production system, adopting a philosophy of pull production has been a challenge. Many ERP vendors have begun to add a JIT toolset to their best practices procedures as a step in this direction. A primary functionality of this tool extends the pull concept through the entire supply chain, from suppliers to customers. Incorporating just-in-time features encourages flexible, small-lot deliveries of parts and materials without the need for traditional purchase orders, and this can be difficult to implement in an ERP system (Nakashima, 2000). One vendor, Oracle, includes in its Flow Manufacturing software multiple component replenishment routines that support multiple component demand patterns. This flexible approach to JIT procurement helps eliminate stock-outs. The software automates material replenishment processes through the creation of a self-regulating pull production system. Oracle also offers Feeder Line Synchronization processes that can create specific JIT schedules for complex, dependent demand to ensure parts are delivered when needed. This is ideal for products that must be built specifically
for customer orders, or that are highly customized or variable. Using additional features, required components and parts can be ordered to ensure delivery at the perfect time to become available during production. Stable, predictable demand can use kanban management processes, but more complex demand can access both ERP data and JIT concepts for delivery (Oracle, 2006).
Toolset #2: Converters to Transform Multi-Level Bills-of-Material into Flat Bills with Event Sequencers

Many ERP vendors have begun to offer functionality intended to eliminate traditional MRP-based bills-of-material that simulate the manufacturing process and establish start and due dates for each department. These time-phased and shop floor control routings are converted into flat bills without sub-assemblies or parent assembly product routing. These converters transform multi-level bills-of-material into flat bills with event sequencers, emulating processes used in Lean manufacturing (Nakashima, 2000).
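A minimal sketch of such a converter follows, assuming a toy BOM; the part names, quantities, and event names are invented, and real converters operate on ERP routing tables rather than an in-memory dictionary.

```python
# Hypothetical multi-level BOM: sub-assemblies reference other entries.
bom = {
    "bike": [("frame", 1), ("wheel", 2)],
    "wheel": [("rim", 1), ("spoke", 32), ("tire", 1)],
    "frame": [("tube", 4)],
}

def flatten(item, qty=1, flat=None):
    """Recursively explode sub-assemblies into total component quantities."""
    flat = {} if flat is None else flat
    for child, n in bom.get(item, []):
        if child in bom:                          # sub-assembly: keep exploding
            flatten(child, qty * n, flat)
        else:                                     # purchased part: accumulate
            flat[child] = flat.get(child, 0) + qty * n
    return flat

# A flat event sequence replaces the per-department, time-phased routing:
events = ["cut tubes", "weld frame", "lace wheels", "final assembly"]

print(flatten("bike"))   # {'tube': 4, 'rim': 2, 'spoke': 64, 'tire': 2}
```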
Toolset #3: Analysis and Mapping Tools

Lean manufacturing initiatives often begin with an analysis phase used to identify potential and existing production problems. The goal is to determine "quick wins" to immediately and dramatically improve production performance (Bragg, 2004). These quick wins may come about because of production practices or when Lean principles are used to examine other business practices (McManus & Richard, 2002). Several methods are used to accomplish this, including Value Stream Analysis (VSA). VSA is used to ensure the product specified by the customer is delivered in a cost-effective way and value is added throughout the supply chain (Paez, Dewees, Genaidy, Tuncel, Karwowski, & Zurada, 2004). Womack & Jones (1996) suggest several business methodologies, including problem solving (from design to product launch), information management (from order taking to delivery), and material transformation (from incoming raw materials to finished goods), are all key to conducting a value stream analysis. VSA identifies process steps for elimination, modification, or improvement, both with and without investment. A Value Stream Mapping (VSM) tool can be implemented to further improve this analysis by graphically illustrating current and desired processes in the value stream. This makes it easier to understand and implement recommendations derived from earlier or ongoing Value Stream Analysis activities (McManus & Richard, 2002). Several vendors offer both VSA and VSM support in their ERP software systems. First, Pelion's ERP system is noted for its strengths in VSM. Its EASYVSM workbench software adapts to specific production environments to reduce manufacturing cycle times and to streamline work processes. Additionally, non-value-added work can be identified and eliminated. This helps increase customer responsiveness and satisfaction (Pelion Systems, 2008). The following features of EASYVSM have been documented as mechanisms to improve product and material flows (QSG Team, 2006):

• Current and future state maps
• Collaborative, enterprise-wide value stream maps
• VSM presentation material
• Visual communication boards in multiple locations
• Cycle time organization features
• Features to manage changeover times, production volumes, information flows, staffing requirements, and inventory strategies
• Calculation tools for value-added and non-value-added times (see the sketch after this list)
• Visualization for high-leverage projects, kaizen events, and operating programs
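As a hypothetical example of the value-added time calculation such a tool performs, the following sketch computes the share of total lead time in which value is actually added; the process steps and times are invented.

```python
# Invented process steps for a single value stream:
# (step name, minutes, whether the step adds value).
steps = [
    ("machining", 45, True),
    ("wait in queue", 240, False),
    ("assembly", 30, True),
    ("inspection", 15, False),
    ("transport", 60, False),
]

value_added = sum(minutes for _, minutes, adds_value in steps if adds_value)
total = sum(minutes for _, minutes, _ in steps)

# A typical VSM finding: only a small fraction of lead time adds value.
print(f"value-added ratio: {value_added / total:.1%}")   # 19.2%
```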
Oracle's Flow Manufacturing software also offers VSA and VSM support. Flow Manufacturing
offers Graphical Line Designer to aid in the visualization of current value streams and to create maps that can be redesigned into balanced line operations in ways that eliminate waste and redundancy. Graphical Line Designer defines standard processes and associates them with configurable models, product families, and networks of processes. Each individual process is linked to a set of primary and feeder processes and can be identified as rework where relevant. Overall, users are able to focus on events important to quality improvement initiatives (Oracle, 2006).
Toolset #4: Demand Smoothing

The focus of demand smoothing tools is to accumulate forecasts and customer demands and then provide graphical analyses of requirements for daily production. This functionality is a natural fit for ERP systems, since forecasting is often based on historical data. Most demand smoothing tools also consider the important resources needed to complete the recommended production (Nakashima, 2000). An ERP package that offers demand smoothing is Manugistics. It specializes in support for dynamic pricing techniques and can be integrated with production planning software modules. Manugistics smoothes demand through dynamic pricing: if demand exceeds production capabilities, pricing is raised, and vice versa. This helps manufacturers proactively stabilize demand so production rates more closely mirror sales (Bragg, 2004). Other ERP vendors offering demand smoothing include Oracle, SAP, and IFS.
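A rough sketch of demand smoothing through dynamic pricing, in the spirit of the approach described above, follows; the capacity, price step, and elasticity values are assumptions, not vendor parameters.

```python
# Assumed parameters for illustration only.
CAPACITY = 100      # units per period the plant can produce
STEP = 0.05         # 5% price change per adjustment
ELASTICITY = -1.5   # % demand change per % price change (assumed)

def smooth(demand, price, max_rounds=50):
    """Nudge price until expected demand is within a tolerance of capacity."""
    for _ in range(max_rounds):
        if abs(demand - CAPACITY) <= 2:
            break
        change = STEP if demand > CAPACITY else -STEP   # raise price when over capacity
        price *= 1 + change
        demand *= 1 + ELASTICITY * change               # demand moves opposite to price
    return round(demand, 1), round(price, 2)

print(smooth(demand=130, price=10.0))   # demand eased toward the 100-unit capacity
```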
Toolset #5: Engineering Change Orders

Engineering change orders can be a huge disruption to workflow. Workflow or other similar technology to communicate engineering changes to the production line can be implemented using an ERP system to more easily and more immediately signal the factory floor. This can be important since
changes can impact the work process as much as the bill of materials (Nakashima, 2000). Many ERP systems have begun to implement this feature.
Toolset #6: Kanban Control and Management

Simple flow manufacturing, particularly in SMEs, often will not require enterprise software. Many successful manufacturers use kanban tags instead of work orders; however, this works best only in situations where production is steady and product demand is stable with few changes (Turbide, 2005). As production becomes more complex, with greater variation, more changes, and more customers, specialty kanban support will be required. Special situations such as promotional peaks, seasonal demand, long lead-time supplies, and highly customized orders can lead to difficulties in the kanban environment (Bragg, 2004). As demand changes, manual kanbans can become more difficult to manage and changes harder to track (Garwood, 2002). According to Bragg (2004), another difficulty associated with manual kanban relates to loss: he reports that up to 1% of manual kanbans get lost every day. Manual kanbans are also hard to track and often labor intensive (Bragg, 2004). Simplicity can come at a cost. ERP software vendors have created support modules that address many kanban control and management problems. A promising solution has been to offer electronic kanbans that continue to allow plant floor operations to capitalize on the advantages of kanban while providing access to a greater range of organizational data and intelligence to optimize the use of resources. This enables smarter scheduling together with the benefits of kanban, so a much larger segment of the manufacturing community can benefit (Turbide, 2005). Additionally, ERP systems can provide analysis tools that help recalculate the size and number of kanban bins required. This can be done on the fly and used to alert key personnel of expected changes. Manual kanbans
can then be changed; if electronic kanbans are being used, the changes can be made automatically (Nakashima, 2000). Pelion's ERP software offers a variety of functions to extract updated requirements from the organizational data repositories and uses this information to send new kanban levels to the plant floor. This can be done on a continual, or at least regular, basis to avoid production over- or under-estimates. These tools allow for proper sizing and timing of material deliveries together with electronic release signals, ensuring vendors provide timely delivery of needed supplies and parts (Garwood, 2002). Oracle also provides kanban functionality in its Flow Manufacturing software. Oracle's approach has been to support all kanban transactions with its Mobile Supply Chain Applications (MSCA) module. Kanban replenishment becomes more sophisticated with access to enterprise data: kanban levels can be based on volumes anticipated from historical data, and the optimal number of kanbans and their sizes can be calculated from anticipated and actual demand. This also protects against errors that may be attributable to varying demands (Oracle, 2006).
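For the bin-recalculation step, a common textbook kanban-sizing rule (not any specific vendor's algorithm) can be sketched as follows; all demand, lead-time, and container figures are illustrative.

```python
import math

def kanban_count(daily_demand, lead_time_days, container_size, safety=0.10):
    """Classic rule of thumb: N = D * L * (1 + S) / C, rounded up."""
    return math.ceil(daily_demand * lead_time_days * (1 + safety) / container_size)

# 480 units/day, a 2-day replenishment loop, 60 units per bin, 10% safety:
print(kanban_count(480, 2, 60, 0.10))   # 18 bins circulating

# Recalculated on the fly when demand shifts, as the ERP tools above do:
print(kanban_count(600, 2, 60, 0.10))   # 22 bins
```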
Toolset #7: Line Design/Sequencing

In multiple-product production lines, minimizing changeovers can become a difficult and tricky problem. In production lines with the capability to simultaneously produce multiple parts, it becomes necessary not only to minimize changeovers but also to sequence them in a logical order that maximizes overall production (Bragg, 2004). Some ERP vendors now offer line design and sequencing toolsets adapted from Lean manufacturing. These software tools are designed so flow logic can be used to maximize specific customer-ordered variations of offered product mixes and the key components required to meet the demand. This is particularly true when products are configure-to-order (Nakashima, 2000).
In a configure-to-order environment, products are assembled from modules based on pre-constructed components that are then finished and configured to meet customer demand (Bradford, Mayfield & Toney, 2001). A line design tool helps to synchronize related work activities and to ensure that raw material consumption is considered and planned in a way that reduces or eliminates queue time, inventory, and work-in-process. This helps support continuous improvement efforts by ensuring the facility has a continually balanced manufacturing line (Bradford, Mayfield & Toney, 2001). Oracle offers a tool called Sequencing Rules to perform these functions. Oracle uses ILOG's optimization engine in conjunction with enterprise data to develop optimal production sequences. ILOG is a third-party software vendor specializing in resource optimization, resource allocation, and network management, among other areas. Oracle's Sequencing Rules toolset offers a variety of features relating to Lean (Bragg, 2004). Among these are the following (see the sketch after this list):

• Groupings: Allows products with common attributes to be placed together.
• Spacing between Products: Ensures hard-to-manufacture variants are not bunched in the production sequence.
• Required/Disallowed Transitions: Transitions are monitored and tracked so changeover-related costs can be reduced.
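A greedy, rule-based sketch of the grouping and spacing ideas follows. This is an illustration, not Oracle's ILOG engine; the order data, attribute names, and gap parameter are invented.

```python
# Invented orders: group by color to cut changeovers, then keep
# hard-to-build variants spaced apart in the sequence.
orders = [
    {"id": 1, "color": "red", "hard": False},
    {"id": 2, "color": "blue", "hard": True},
    {"id": 3, "color": "red", "hard": True},
    {"id": 4, "color": "blue", "hard": False},
    {"id": 5, "color": "red", "hard": False},
]

def sequence(orders, min_gap=2):
    # Grouping rule: sort by color so identical setups run back to back.
    pending = sorted(orders, key=lambda o: o["color"])
    out, last_hard = [], -min_gap
    while pending:
        # Spacing rule: prefer the first order that keeps hard variants apart.
        pick = next((o for o in pending
                     if not (o["hard"] and len(out) - last_hard < min_gap)),
                    pending[0])        # fall back if every choice would violate
        pending.remove(pick)
        if pick["hard"]:
            last_hard = len(out)
        out.append(pick)
    return [o["id"] for o in out]

print(sequence(orders))   # [2, 4, 1, 3, 5]: blues run together, hard ones spaced
```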
In another ERP package, from SYSPRO (ARC Strategies, 2007), a "rules-based" Product Configurator software module has been developed specifically to address the needs of assemble-to-order or engineer-to-order SMEs. The main premise is that products composed of varying combinations of predefined components, subassemblies, and operations can be configured from within the Sales Order, Quotation/Estimating, and/or Work-In-Process modules. This gives a salesperson or estimator the ability to immediately configure a product based on a client's answers to questions.
The Configurator software can be used to select a number of pre-finished products, to create a sales kit, or to create a standard bill of material to request the manufacture of a custom product (CTS, 2008). Backflushing is the deduction from inventory records of the component parts used in an assembly or subassembly, by exploding the bill of materials by the production count of assemblies produced (Cox & Blackstone, 2008). In other words, inventory can be backflushed to remove raw material from inventory while adding to finished goods inventory or to inventories of stocked sub-assemblies. In Lean manufacturing, backflushing should be a routine transaction for material issues. Activity reporting and inventory level updates can be replaced by the performance of all inventory transactions upon completion of a single unit (Nakashima, 2000). In some plants, the replenishment signal for a bin is generated from a backflush, since this saves the waste of creating kanban cards and scanning them. Oracle Flow Manufacturing backflushes all the components and performs resource and overhead transactions upon recording the assembly completion. Oracle Flow Manufacturing allows scrapping assemblies and returning them from scrap at any operation, using either scheduled or unscheduled flow schedules. A scrap transaction will cause all the components through the scrap operation to be backflushed (Oracle, 2006).
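A minimal sketch of the backflush transaction follows, reusing the toy flat BOM from the earlier converter example; the part names, quantities, and starting inventory are invented.

```python
# Invented flat BOM: component usage per finished unit.
flat_bom = {"rim": 2, "spoke": 64, "tire": 2, "tube": 4}

inventory = {"rim": 500, "spoke": 10000, "tire": 480, "tube": 900}
finished_goods = 0

def backflush(completed_units):
    """On completion, deduct component usage and credit finished goods
    in one transaction, rather than recording each material issue."""
    global finished_goods
    for part, per_unit in flat_bom.items():
        inventory[part] -= per_unit * completed_units
    finished_goods += completed_units

backflush(50)
print(inventory)        # {'rim': 400, 'spoke': 6800, 'tire': 380, 'tube': 700}
print(finished_goods)   # 50
```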
Toolset #8: Orderless Flow Manufacturing

Traditional batch production is controlled through work orders, which, by Lean standards, include too much waste (Turbide, 2005). Kanban-based flow manufacturing is conducted without work orders, and therefore without the waste associated with them. Flow manufacturing is characterized by production lines and/or cells in which work is pulled, or moves piece by piece through the process, not in batches (Turbide, 2005). This pull-based work flow in the production line is
generally achieved by using visual kanban signals, which can be tags, labels, containers, or electronic signals (Turbide, 2005). One of the available ERP systems offering support for orderless manufacturing is TTW's WinMan software. It uses an empty container as a signal to trigger internal replenishment. These empty containers further indicate completion of a finished product and backflushing of the component material. To help complete the transaction, TTW has designed a set of internal kanban cards that can be bar-code scanned (WinMan, 2003). Oracle Flow Manufacturing records completions of assemblies without having to create work orders (Oracle, 2006).
ERP Vendors Offering Lean Support

As is evident from the list of Lean-enabled features now being supported in ERP systems, a number of vendors have entered the marketplace. Table 2 provides a list of some of these vendors and their primary Lean-enabled software modules. Three of these vendors are then explored in more detail.
Table 2. Lean-enabled ERP systems

Lean-Enabled Software | Vendor | Website
Alliance/MFG | Exact Software | www.alliancemfg.com
American Software Enterprise Version 3 | American Software | www.amsoftware.com
Demand Point | Pelion Systems | www.pelionsystems.com
E2 | Shoptech | www.shoptech.com
e-Intelliprise | American Software, Inc. | www.amsoftware.com/marketing/intelliprise-home.asp
Fourth Shift | Softbrands Manufacturing Solutions | www.fourthshift.com
Global Shop | Global Shop Solutions | www.globalshopsolutions.com/product.htm
IFS Applications | IFS | www.ifsworld.com
Infor ERP VISUAL | Infor Global Solutions, Inc. | www.infor.com
Made2Manage | Consona | www.made2manage.com
MFG/PRO | QAD | www.qad.com
Manugistics | JDA Software Group, Inc. | www.jda.com
MISys | MISys | www.misysinc.com/mi2kover.htm
Oracle E-Business Suite | Oracle | www.oracle.com
PeopleSoft Enterprise | Oracle | www.oracle.com/applications/peoplesoft-enterprise.html
Seradex ERP Solutions | Seradex | www.seradex.com/ERP/Lean_Manufacturing_ERP.php (Seradex, 2007)
Sage ERP X3 | Adonix | www.adonix.com
SYSPRO Enterprise | SYSPRO | www.syspro.com
Ultriva | Ultriva (ebots) | www.ultriva.com
Vista | Epicor | www.epicor.com
WinMan | TTW | www.winmanusa.com
xApp | SAP | www.sap.com/solutions

Vendor #1: Oracle

Oracle is considered a leader in Lean manufacturing implementation within its ERP system. According to its research, Oracle had over 100 companies using its Flow Manufacturing module, and as many as four times that number used its kanban software (Bragg, 2004). Oracle Flow Manufacturing has been implemented as part of the Oracle E-Business Suite (Oracle, 2006). Oracle considers Flow Manufacturing crucial to its e-commerce strategy. Because of this, Flow Manufacturing has been promoted as effective in the reduction of product cycle times, inventories, and process complexity. In addition, Oracle claims the software will simplify production and help meet production demand at affordable prices (Kent, 2002). Of course, none of these features
would be possible without the synergy offered by coupling Lean with the wide array of ERP enterprise data. Additionally, Oracle's software allows users to create simulations of expected Lean environment changes. Several simulation tools allow experimentation with line balancing, optimized product flows, and heijunka sequencing (Wheatley, 2007). Oracle's software not only supports kanban from assembly to the supplier base, it also includes tools that provide users with the capability to view historic demand and optimize the size and number of system kanban cards (Wheatley, 2007). Various researchers and practitioners have reported a great deal of success with Oracle's Lean-enabled ERP
systems (Lee & Adam, 1986; WinMan, 2006). Table 3 provides a list of several Lean-enabled tools offered by Oracle (Oracle, 2006):
Table 3. Lean-enabled tools from Oracle

Feature | Description
Value Stream Mapping | Visualizes opportunities for improvement
Value Stream Analysis | Identifies opportunities for improvement
Line Design and Balancing | Supports mixed-model production of standard or configured products
Just-in-Time Procurement | Pull-based kanban replenishment chain supported to improve inventory turns; also supports synchronized component replenishment for configured or build-to-order components
Electronic Work Methods | Lean manufacturing execution workstation supported for operators, enabling a move toward a paperless shop floor
Backflushing | Scrap transaction generated automatically, causing all components to be backflushed
Kanban | Supports kanban transactions with the Mobile Supply Chain Applications (MSCA) module
Orderless Flow Manufacturing | Allows recording of completions of assemblies without having to create work orders
Sequencing and Scheduling Capability | Produces directly to customer order

Vendor #2: TTW

TTW's Windows ERP product, WinMan, is particularly suited for use by SMEs. The product features an integrated manufacturing management system designed specifically for the needs of small to midsized enterprises. Since it manages both manufacturing and business operations, including bill of materials, purchasing, material requirements planning (MRP), inventory planning and control, and master production scheduling (MPS), it can help control all aspects of an SME (Global Shop Solution, 2008). TTW reports several success stories related to WinMan implementations, including Lantech being able to grow its share of the stretch-wrap machinery market from 35 percent to 50 percent. Capricorn Cars Parts increased its inventory turnover by 188% and expanded its parts portfolio from 1,800 to 15,000 parts while experiencing only a marginal increase in overhead costs (WinMan, Athena Controls, 2008). Since many SMEs utilize the benefits offered by Lean practices, WinMan provides a number of related tools, which Table 4 illustrates (CTS, 2008; WinMan, 2006).
Vendor #3: Pelion Systems Unlike other ERP vendors, Pelion Systems’ suite of products is developed as third party software to augment existing ERP applications. The primary objective of Pelion’s software is to reduce excessive lead time and lot size obstacles through the use of Lean manufacturing practices (Garwood, 2002). Pelion calls its primary software Demand Flow Technology. The approach is to combine a
Table 4. Lean-enabled tools from TTW's WinMan

Support for Just-in-Time: External Kanban generates bar-coded kanban cards for pre-selected suppliers and allows manufacturers to pull inventory on demand from the shop floor
Demand Driven Manufacturing: Eliminates non-value-added activities and reduces inventory by using pull manufacturing techniques
Push/Pull Flexibility: Ability to process back-to-back orders in both purchasing and manufacturing, simulating the pull concept while still utilizing traditional push purchase orders and manufacture orders
Line Design and Sequencing: Product Configuration workbench allows selection of various options on the fly during the sales order process, guided by pre-selected logic; may offer inclusion and/or exclusion rules
Orderless Flow Manufacturing: Uses an empty container as a signal to trigger internal replenishment; empty containers may signal completion of finished product and trigger backflushing of the component material
Backflushing: Deduction from inventory records of component parts used in an assembly or subassembly
Full integration occurs within the line design and balancing tools inside Pelion's Lean module, Collaborative Flow Manufacturing (CFM), which further offers support for kanban size determination in flow lines. Pelion provides functionality that gives users the capability to work on four time horizons simultaneously (Bragg, 2004). For instance, a production planner can use an annual time horizon to develop a plant layout, a quarterly horizon to implement new product introductions together with expected kanban sizes and takt times, monthly horizons for work schedules and other plans, and daily horizons to monitor and manage work output (Bragg, 2004). Bragg (2004) reports a number of success stories associated with Pelion software. For instance, Husqvarna reduced its inventory by more than a million dollars and improved the proportion of orders fulfilled within a predetermined time frame from less than sixty percent to more than ninety-five percent. Brooks Automation reported a twenty-million-dollar inventory reduction. Nissan Forklift reported a twenty percent reduction in final assembly direct labor, a forty-seven percent reduction in inventory and nearly five and a half million dollars in annual savings. Table 5 provides a look at the Lean-enabled toolsets provided by Pelion Systems.
FUTURE TRENDS

The near saturation of ERP sales in large organizations has encouraged software vendors to seek additional venues for development and growth. SMEs provide a natural fit for a new focus of ERP software development. A problem in marketing software within this arena has been the widespread focus on Lean manufacturing among many SMEs. The problem is complicated by the fact that most ERP software is based on push methodologies while Lean practice revolves around pull methodologies. Software vendors have addressed this through a variety of new software modules and ERP add-ons. This in turn has encouraged additional manufacturers to adopt Lean practices and experience the synergy offered by using Lean-enabled ERP software. However, manufacturers face a variety of issues when making the transition to Lean (Michel, 2002). Although the primary question, "Can ERP software be used to support Lean?", has been answered by a variety of organizations with their software solutions (see Table 6), many Lean purists believe only visual signals and shop floor implementation provide a true implementation of this philosophy, and that the added functionality of ERP flies in the face of basic Lean premises. Others believe ERP systems provide additional transactional foundations and historical data collection and analyses
Table 5. Lean-enabled toolsets from Pelion Systems

Value Stream Analysis: EasyVSM workbench determines key breakthrough improvement opportunities and helps distinguish between value-added and non-value-added activities
Value Stream Mapping: Delivers a visual roadmap of Kaizen and other improvements, increases visibility and aids communication
Lean Engineering: Defines flow as a means of driving Lean process layouts and aids continuous improvement efforts
Lean Material Flow: Aids in material flow design, strategic inventory decisions, stock-out reductions, pull signal methods in all parts of production, and inventory turnover; supports rapid analysis of dynamic material requirements
Line Design and Sequencing: Matches product mix and demand volume (both actual and forecast) with factory resources; provides alerts to potential resource allocation bottlenecks
Kanban Control and Management: Enables real-time coordination of dynamic demand-pull requirements; tracks fulfillment performance and ensures production goals are met in an effective, timely fashion
that can further improve Lean practices. Added to these are the benefits associated with having business software integrated and tied to the same database. ERP vendors are competing with best-of-breed software vendors to understand and offer solutions to support best practice implementations of Lean production (Michel, 2002). A Lean-enabled ERP implementation must necessarily include a variety of new modules, procedures, practices and toolsets to add new functions to existing ERP software (Nakashima, 2000). These modules often include: value stream analysis, value stream mapping, lean engineering, lean material flow, line design and sequencing, backflush capability, kanban management, and others. A variety of software packages and functionality have emerged. Table 6, adapted from Halgeri et al. (2008), provides a glimpse of these, cross-referenced with the three vendor solutions this article explored in more depth. Certainly the future of ERP systems will include more Lean-enabled tools and continue to capitalize on the synergy derived from using best Lean practices within an environment of automatic data collection and access. SMEs will continue to derive benefits from appropriately scaled versions of ERP systems without losing their edge in terms of nimbleness and ability to react to their customers' needs. Larger organizations will gain greater access to the methods developed in smaller organizations and will find closer links and better communication along the entire supply chain
possible. More ERP vendors will add Lean tools and continue to improve those already in use.
CONCLUSION

Overall, this research suggests Lean manufacturing has been successfully integrated with ERP software as a best-of-breed approach useful to many SMEs and other manufacturers. With global competition heating up, small and medium enterprises, more than ever, must have access to superior software systems and tools. This article has described how ERP (Enterprise Resource Planning) and Lean manufacturing have been implemented in various ways. In early incarnations, ERP systems were considered a hindrance to Lean manufacturing efforts and were criticized for encouraging large inventories and slower production. In more recent years, it has become apparent that SMEs must seek the best of both worlds: use the benefits of ERP and retain the nimbleness provided by Lean. Linking ERP and Lean methods can lead to competitive advantage, as demonstrated by a number of software solutions offered by vendors such as Oracle, TTW, and Pelion. These tools represent the future of SMEs and their use of ERP systems.
Table 6. Software comparison

Lean Initiative / Oracle Flow Manufacturing (Turbide, 2005) / TTW's WinMan (Wheatley, 2007) / Pelion Systems Demand Flow (Bragg, 2004; Pelion, 2008)
Analysis Tools: √ / - / √
Mapping Tools: √ / - / √
JIT Procurement Support: √ / √ / √
Kanban Control: √ / √ / √
Sequencing: √ / √ / √
Demand Smoothing: √ / √ / √
REFERENCES

Aladwani, A. M. (2001). Change management strategies for successful ERP implementation. Business Process Management Journal, 7(3), 266–275. doi:10.1108/14637150110392764

Bartholomew, D. (1999). Lean vs. ERP. Industry Week, 248, 1–6.

Bartholomew, D. (2003). ERP: Learning to be Lean. Industry Week. Retrieved July 19, 2008, from http://www.industryweek.com/ReadArticle.aspx?ArticleID=2289

Bonvik, A. M., Couch, C. E., & Gershwin, S. B. (1997). A comparison of production-line control mechanisms. International Journal of Production Research, 35(3), 789–804. doi:10.1080/002075497195713

Bradford, M., Mayfield, T., & Toney, C. (2001). Does ERP fit in a Lean world? Strategic Finance, May, 28–34.

Bragg, S. (2004). Software solutions taking Lean manufacturing to the next level. Retrieved July 20, 2008, from http://www.oracle.com/lean/arc_leanmfg.pdf

Burton, T. T., & Boeder, S. M. (2003). The Lean extended enterprise: Moving beyond the four walls to value stream excellence. Fort Lauderdale, FL: J. Ross Publishing.

Cheng, T. C. E., & Podolsky, S. (1993). Just-in-time manufacturing: An introduction (2nd ed.). London: Chapman & Hall.

Cox, J., & Blackstone, J. (Eds.). (2008). APICS dictionary (12th ed.). Chicago, IL: APICS Educational Society for Resource Management.

CTS. (2008). Manufacturing software reviews. Retrieved July 20, 2008, from http://www.ctsguides.com/manufacturing.asp
Deep, A., Dani, S., & Burns, N. (2008). Investigating factors affecting ERP selection in the made-to-order SME sector. Journal of Manufacturing Technology Management, 19(4), 430–446. doi:10.1108/17410380810869905

Gable, G., & Stewart, G. (1999). SAP R/3 implementation issues for small to medium enterprises. In W. D. Haseman & D. L. Nazareth (Eds.), Proceedings of the 5th Americas Conference on Information Systems (pp. 779-781), Milwaukee, WI.

Garwood, D. (2002). ERP or flow manufacturing? Collaboration, not separation. R.D. Garwood, Inc. Retrieved July 22, 2008, from http://www.rdgarwood.com/archive/hot56.asp

Global Shop Solutions. (2008). Global solutions products. Retrieved August 30, 2008, from http://www.globalshopsolutions.com/erp-software/default.asp

Greene, A. (2004). Toyota production systems: Lean goes mainstream. Managing Automation (April). Retrieved July 20, 2008, from http://www.managingautomation.com/maonline/magazine/read/view/Toyota_Production_Systems__Lean_Goes_Mainstream_3874

Halgeri, P., Pei, Z. J., Iyer, K. S., Bishop, K., & Shehadeh, A. (2008). ERP systems supporting Lean manufacturing: A literature review. 2008 International Manufacturing Science & Engineering Conference (MSEC), Evanston, IL, USA.

Jacobs, R. (2007). Enterprise resource planning (ERP) - A brief history. Journal of Operations Management, 25(2), 357–363. doi:10.1016/j.jom.2006.11.005

Kent, J. F. (2002). An examination of traditional ERP and Lean manufacturing production control methods with a view of flow manufacturing software as an alternative. MS thesis, University of Oregon.
Koch, C. (2008). ABC: An introduction to ERP. CIO. Retrieved March 18, 2008, from http://www.cio.com/article/40323/ABC_An_Introduction_to_ERP/1

Koh, L., & Simpson, M. (2005). Change and uncertainty in SME manufacturing environments using ERP. Journal of Manufacturing Technology Management, 16(6), 629–653. doi:10.1108/17410380510609483

Lee, T. S., & Adam, E. E. Jr. (1986). Forecasting error evaluation in material requirements planning (MRP) production-inventory systems. Management Science, 32(9), 1186–1205. doi:10.1287/mnsc.32.9.1186

Li, Y., Liao, X. W., & Lei, H. Z. (2006). A knowledge management system for ERP implementation. Systems Research and Behavioral Science, 23(2), 157–168. doi:10.1002/sres.751

McManus, H. L., & Richard, L. M. (2002). Value stream analysis and mapping for product development. In Proceedings of the 23rd ICAS Congress (pp. 6103.1-6103.10). Toronto, Canada.

Mekong Capital. (2004). Introduction to Lean manufacturing for Vietnam. Retrieved August 30, 2008, from http://www.mekongcapital.com/Introduction%20to%20Lean%20Manufacturing%20-%20English.pdf

Metzger, B. (2008). Linking Lean and ERP systems together for sustained advantage [White paper of TriMin Systems, Inc.]. Retrieved August 30, 2008, from http://www.triminmfg.com/images/KnowledgeBase/Metzger.pdf

Michel, R. (2002). Multiple paths to Lean: Detector Electronics, Norlen turn to specialized Lean manufacturing solutions. Manufacturing Business Technology. Retrieved July 20, 2008, from http://www.mbtmag.com/article/CA254538.html?q=Lean+ERP+software
Miller, G. J. (2002). Lean and ERP: Can they co-exist? Retrieved July 19, 2008, from http://facilitatorgroup.net/pdf/LeanERPCoExist.pdf

Nagendra, P. B., & Das, S. K. (1999). MRP/SFX: A kanban-oriented shop floor extension to MRP. Production Planning and Control, 10(3), 207–218. doi:10.1080/095372899233172

Nakashima, B. (2000). Lean and ERP: Friend or foe? Advanced Manufacturing (September). Retrieved July 20, 2008, from http://www.advancedmanufacturing.com/index.php?option=com_staticxt&staticfile=informationtech.htm&Itemid=44

Oracle. (2006). Oracle Flow Manufacturing datasheet. Retrieved July 19, 2008, from http://www.oracle.com/applications/manufacturing/flow-manufacturing-data-sheet.pdf

Paez, O., Dewees, J., Genaidy, A., Tuncel, S., Karwowski, W., & Zurada, J. (2004). The Lean manufacturing enterprise: An emerging sociotechnological system integration. Human Factors and Ergonomics in Manufacturing, 14(3), 285–306. doi:10.1002/hfm.10067

Pelion Systems. (2008). Pelion Systems solutions. Retrieved July 22, 2008, from http://www.pelionsystems.com/solutions.asp

Piszczalski, M. (2000). Lean vs. information systems. Automotive Manufacturing & Production, 112(8), 26–28.

Poppendieck, M. (2002). Principles of Lean thinking (pp. 1-7). Poppendieck LLC.

Rashid, M. A., Hossain, L., & Patrick, J. D. (2002). The evolution of ERP systems: A historical perspective. Hershey, PA: Idea Group.

Seradex. (2007). Lean Manufacturing - Seradex ERP Solutions. Retrieved September 3, 2008, from http://www.seradex.com/ERP/Lean_Manufacturing_ERP.php
Spearman, M. L., Hopp, W. J., & Woodruff, D. L. (1999). A hierarchical control architecture for constant work-in-process (CONWIP). Journal of Manufacturing and Operations Management, 2(3), 147–171.

Spearman, M. L., & Zazanis, M. A. (1992). Push and pull production systems: Issues and comparisons. Operations Research, 40(3), 521–532. doi:10.1287/opre.40.3.521

Steger-Jensen, K., & Hvolby, H. (2008). Review of an ERP system supporting Lean manufacturing. In T. Koch (Ed.), International Federation for Information Processing (IFIP), Volume 257, Lean Business Systems and Beyond (pp. 67-74). Boston, MA: Springer.

ARC Strategies. (2007). The when, why and how of ERP support for Lean. SYSPRO, 1-22. Retrieved July 18, 2008, from http://www.syspro.com

Strategosinc. (2008). Origins & history of Lean manufacturing. Retrieved July 18, 2008, from http://www.strategosinc.com/just_in_time.htm
Team, Q. S. G. (2006). Manufacturing can actively manage their value streams with Pelion's next generation EasyVSM tool. Retrieved July 20, 2008, from http://www.qsoftguide.com/cm/index.php?blog=2&p=254&more=1&c=1&tb=1&pb=1

Turbide, D. A. (2005). Five ways ERP can help you implement Lean. EPICOR Software. Retrieved July 16, 2008, from http://whitepapers.zdnet.com/abstract.aspx?docid=351964

Wheatley, M. (2007). ERP is needed to sustain the gains of Lean programs. Manufacturing Business Technology. Retrieved July 18, 2008, from http://www.mbtmag.com/article/CA6450623.html

WinMan. (2003). WinMan and Lean systems - A white paper on integrating WinMan with Lean systems. Retrieved August 20, 2008, from http://www.winmanusa.com/PDF/WinMan_Lean_Systems.pdf

WinMan. (2006). Athena Controls. Retrieved August 20, 2008, from http://www.winmanusa.com/success.asp

Womack, J. P., & Jones, D. T. (1996). Beyond Toyota: How to root out waste and pursue perfection. Harvard Business Review, (September-October), 140–158.
This work was previously published in Enterprise Information Systems for Business Integration in SMEs: Technological, Organizational, and Social Dimensions, edited by Maria Manuela Cruz-Cunha, pp. 56-75, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 4.17
Specifying Software Models with Organizational Styles

Manuel Kolp
Université Catholique de Louvain, Place des Doyens, Belgium

Yves Wautelet
Université Catholique de Louvain, Place des Doyens, Belgium

Stéphane Faulkner
University of Namur, Rempart de la Vierge, Belgium
DOI: 10.4018/978-1-60566-146-9.ch006

ABSTRACT

Organizational Modeling is concerned with analyzing and understanding the organizational context within which a software system will eventually function. This chapter proposes organizational patterns motivated by organizational theories intended to facilitate the construction of organizational models. These patterns are defined from real world organizational settings, modeled in i* and formalized using the Formal Tropos language. Additionally, the chapter evaluates the proposed patterns using desirable qualities such as coordinability and predictability. The research is conducted in the context of Tropos, a comprehensive software system development methodology.

INTRODUCTION

Analyzing the organizational and intentional context within which a software system will eventually operate has been recognized as an important element of the organizational modeling process, also called early requirements engineering (see, e.g., Anton 1996; Dardenne, van Lamsweerde & Fickas 1993; Yu 1995). Such models are founded on primitive concepts such as those of actor and goal. This chapter focuses on the definition of a set of organizational patterns that can be used as building blocks for constructing such models. Our proposal is based on concepts adopted from the organization theory and strategic alliances literature. Throughout the paper, we use i* (Yu 1995) as the modeling framework in terms of which the proposed patterns are presented
and accounted for. The research reported in this paper is being conducted within the context of the Tropos project (Giorgini, Kolp, Mylopoulos & Pistore 2004, Giorgini, Kolp, Mylopoulos & Castro 2005), whose aim is to construct and validate a software development methodology for agent-based software systems. The methodology adopts ideas from multi-agent system technologies, mostly to define the implementation phase of our methodology. It also adopts ideas from Requirements Engineering, where actors and goals have been used heavily for early requirements analysis. The project is founded on the idea that actors and goals can be used as fundamental concepts for modeling and analysis during all phases of software development, not just early requirements or implementation. More details about Tropos can be found in (Giorgini et al. 2005). The present work continues the research in progress on social abstractions for the Tropos methodology. In (Kolp, Giorgini & Mylopoulos 2002a), we have detailed a social ontology for Tropos to consider information systems as social structures all along the development life cycle. In (Giorgini, Kolp & Mylopoulos 2002, Kolp, Giorgini & Mylopoulos 2002b, Kolp, Giorgini & Mylopoulos 2006), we have described how to use this Tropos social ontology to design multi-agent system architectures, notably for e-business applications (Kolp, Do & Faulkner 2004). As a matter of fact, multi-agent systems can be considered structured societies of coordinated autonomous agents. In the present paper, which is an extended and revised version of (Kolp, Giorgini & Mylopoulos 2003), we emphasize the use of organizational patterns based on organization theory and strategic alliances for early requirements analysis, with the concern of modeling the organizational setting for a system-to-be in terms of abstractions that could better match its operational environment (e.g., an enterprise, a corporate alliance, etc.).

The paper is organized as follows. Section 2 describes organizational and strategic alliance theories, focusing on the internal and external
structure of an organization. Section 3 details two organizational patterns – the structure-in-5 and the joint venture – based on real world examples of organizations. These patterns are modeled in terms of social and intentional concepts using the i* framework and the Formal Tropos specification language. Section 4 identifies a set of desirable non-functional requirements for evaluating these patterns and presents a framework to select a pattern with respect to these identified requirements. Section 5 overviews the Tropos methodology. Finally, Section 6 summarizes the contributions of the chapter and overviews related work.
STRUCTURING ORGANIZATIONS

Organizational structures are primarily studied by Organization Theory (e.g., Mintzberg 1992, Scott 1998, Yoshino & Rangan 1995), which describes the structure and design of an organization, and by Strategic Alliances (e.g., Dussauge & Garrette 1999, Gomes-Casseres 1996, Morabito, Sack & Bhate 1999, Segil 1996), which models the strategic collaborations of independent organizational stakeholders who have agreed to pursue a set of agreed-upon business goals. Both disciplines aim to identify and study organizational patterns that describe a system at a macroscopic level in terms of a manageable number of subsystems, components and modules inter-related through dependencies. In this chapter, we are interested in identifying, formalizing and applying, for organizational modeling, patterns that have already been well understood and precisely defined in organizational theories. Our purpose is not to categorize them exhaustively nor to study them from a managerial point of view. The following sections thus insist only on patterns that are, by their nature, interesting candidates, and that have been studied in great detail in the organizational literature and presented as fully formed patterns.
Organization Theory

"An organization is a consciously coordinated social entity, with a relatively identifiable boundary, that functions on a relatively continuous basis to achieve a common goal or a set of goals" (Morabito et al. 1999). Organization theory is the discipline that studies both structure and design in such social entities. Structure deals with the descriptive aspects while design refers to the prescriptive aspects of a social entity. Organization theory describes how practical organizations are actually structured, offers suggestions on how new ones can be constructed, and how old ones can change to improve effectiveness. To this end, since Adam Smith, schools of organization theory have proposed models and patterns to try to find and formalize recurring organizational structures and behaviors. In the following, we briefly present organizational patterns identified in Organization Theory. The structure-in-5 will be studied in detail in Section 3.

The Structure-in-5. An organization can be considered an aggregate of five substructures, as proposed by Mintzberg (Mintzberg 1992). At the base level sits the Operational Core, which carries out the basic tasks and procedures directly linked to the production of products and services (acquisition of inputs, transformation of inputs into outputs, distribution of outputs). At the top lies the Strategic Apex, which makes executive decisions ensuring that the organization fulfills its mission in an effective way and defines the overall strategy of the organization in its environment. The Middle Line establishes a hierarchy of authority between the Strategic Apex and the Operational Core. It consists of managers responsible for supervising and coordinating the activities of the Operational Core. The Technostructure and the Support are separated from the main line of authority and influence the operating core only indirectly. The Technostructure serves the organization by making the work of others more effective, typically by
standardizing work processes, outputs, and skills. It is also in charge of applying analytical procedures to adapt the organization to its operational environment. The Support provides specialized services, at various levels of the hierarchy, outside the basic operating workflow (e.g., legal counsel, R&D, payroll, cafeteria). We describe and model examples of structures-in-5 in Section 3.

The pyramid pattern is the well-known hierarchical authority structure. Actors at lower levels depend on those at higher levels. The crucial mechanism is the direct supervision from the Apex. Managers and supervisors at intermediate levels only route strategic decisions and authority from the Apex to the operating (low) level. They can coordinate behaviors or take decisions on their own, but only at a local level.

The chain of values pattern merges, backward or forward, several actors engaged in achieving or realizing related goals or tasks at different stages of a supply or production process. Participants, who act as intermediaries, add value at each step of the chain. For instance, in the domain of goods distribution, providers are expected to supply quality products, wholesalers are responsible for ensuring their massive exposure, while retailers take care of the direct delivery to the consumers.

The matrix pattern proposes a multiple command structure: vertical and horizontal channels of information and authority operate simultaneously. The principle of unity of command is set aside, and competing bases of authority are allowed to jointly govern the workflow. The vertical lines are typically those of functional departments that operate as "home bases" for all participants; the horizontal lines represent project groups or geographical arenas where managers combine and coordinate the services of the functional specialists around particular projects or areas.

The bidding pattern involves competitive mechanisms, and actors behave as if they were taking part in an auction. An auctioneer actor runs the show, advertises the auction issued by the auction issuer, receives bids from bidder actors
and ensures communication and feedback with the auction issuer who is responsible for issuing the bidding.
Strategic Alliances

A strategic alliance links specific facets of two or more organizations. At its core, this structure is a trading partnership that enhances the effectiveness of the competitive strategies of the participant organizations by providing for the mutually beneficial trade of technologies, skills, or products based upon them. An alliance can take a variety of forms, ranging from arm's-length contracts to joint ventures, from multinational corporations to university spin-offs, from franchises to equity arrangements. Varied interpretations of the term exist, but a strategic alliance can be defined as possessing simultaneously the following three necessary and sufficient characteristics:

• The two or more organizations that unite to pursue a set of agreed upon goals remain independent subsequent to the formation of the alliance.
• The partner organizations share the benefits of the alliance and control over the performance of assigned tasks.
• The partner organizations contribute on a continuing basis in one or more key strategic areas, e.g., technology, products, and so forth.
In the following, we briefly present organizational patterns identified in Strategic Alliances. The joint venture will be studied in detail in Section 3.

The joint venture pattern involves an agreement between two or more intra-industry partners to obtain the benefits of larger scale, partial investment and lower maintenance costs. A specific joint management actor coordinates tasks and manages the sharing of resources between partner actors. Each partner can manage and control itself on a
local dimension and interact directly with other partners to exchange resources, such as data and knowledge. However, the strategic operation and coordination of such an organization, and its actors on a global dimension, are only ensured by the joint management actor, in which the original actors possess equity participations. We describe and model examples of joint ventures in Section 3.

The arm's-length pattern implies agreements between independent and competing partner actors. Partners keep their autonomy and independence but act and put their resources and knowledge together to accomplish precise common goals. No authority is lost or delegated from one collaborator to another.

The hierarchical contracting pattern identifies coordinating mechanisms that combine arm's-length agreement features with aspects of pyramidal authority. Coordination mechanisms developed for arm's-length (independent) characteristics involve a variety of negotiators, mediators and observers at different levels handling conditional clauses to monitor and manage possible contingencies, negotiate and resolve conflicts, and finally deliberate and take decisions. Hierarchical relationships, from the executive apex to the arm's-length contractors, restrict autonomy and underlie a cooperative venture between the parties.

The co-optation pattern involves the incorporation of representatives of external systems into the decision-making or advisory structure and behavior of an initiating organization. By co-opting representatives of external systems, organizations are, in effect, trading confidentiality and authority for resources, knowledge assets and support. The initiating system has to come to terms with the contractors for what is being done on its behalf, and each co-opted actor has to reconcile and adjust its own views with the policy of the system with which it has to communicate.
MODELING ORGANIZATIONAL PATTERNS

We will define an organizational pattern as a metaclass of organizational structures offering a set of design parameters to coordinate the assignment of organizational objectives and processes, thereby affecting how the organization itself functions. Design parameters include, among others, goal and task assignments, standardization, supervision and control dependencies, and strategy definitions. This section describes two of the organizational patterns presented in Section 2: the structure-in-5 and the joint venture.
Structure-in-5

To detail and specify the structure-in-5 as an organizational pattern, this section presents two case studies: LDV Bates (Bates 2006) and GMT (GMT 2006). They will serve to propose a model and a semi-formal specification of the structure-in-5.

LDV Bates. LDV Bates is an advertising agency located in Belgium that employs about fifty staff, as detailed in Table 1.
The Direction – four directors responsible for the main aspects of LDV Bates's Global Strategy (advertising campaigns, creative activities, administration, and finances) – forms the Strategic Apex. The Middle Line, composed of the Campaigns Management staff, is in charge of finding and coordinating advertising campaigns (marketing, sales, edition, graphics, budget, . . .). It is supported in these tasks by the Administration and Accounts and the IT and Documentation departments. The Administration and Accounts constitutes the Technostructure, handling administrative tasks and policy, paperwork, purchases and budgets. The Support groups the IT and Documentation departments. It defines the IT policy of LDV Bates, provides technical means required for the management of campaigns, and ensures services for system support as well as information retrieval (documentation resources). The Operational Core includes the Graphics and Edition staff in charge of the creative and artistic aspects of realizing campaigns (texts, photographs, drawings, layout, design, logos). Figure 1 models LDV Bates as a structure-in-5 using the i* strategic dependency model. i* is a
Table 1. Organization of LDV Bates

Direction: 1 Campaigns Director, 1 Creative Director, 1 Administrative Director, 1 Finance Director
Campaigns Management: 2 Campaign managers, 3 Campaign marketers, 1 Editor in Chief, 1 Creative Manager
Graphics: 6 Graphic designers, 2 Photographers
Edition: 2 Editors, 4 Copy writers
IT: 1 IT manager, 1 Network administrator, 1 System administrator, 1 Analyst, 1 Computer technician
Documentation: 1 Media librarian, 1 Resource librarian, 1 Knowledge worker
Administration: 3 Direction assistants, 4 Manager Secretaries, 2 Receptionists, 2 Clerks/typists, 1 Filing clerk
Accounts: 1 Accountant manager, 1 Credit controller, 2 Accounts clerks, 2 Purchasing assistants
Figure 1. LDV Bates as a Structure-in-5
modeling framework for organizational modeling (Yu 1995), which offers goal- and actor-based notions such as actor, agent, role, position, goal, softgoal, task, resource, belief and different kinds of social dependency between actors. Its strategic dependency model describes the network of social dependencies among actors. It is a graph, where each node represents an actor and each link between two actors indicates that one actor depends on the other for some goal to be attained. A dependency describes an "agreement" (called dependum) between two actors: the depender and the dependee. The depender is the depending actor, and the dependee is the actor who is depended upon. The type of the dependency describes the nature of the agreement. Goal dependencies represent delegation of responsibility for fulfilling a goal; softgoal dependencies are similar to goal dependencies, but their fulfillment cannot be defined precisely (for instance, the appreciation is subjective or fulfillment is obtained only to a given extent); task dependencies are used in situations where the dependee is required to perform a given activity; and resource dependencies require the dependee to provide a resource to the depender. As shown in Figure 1, actors are represented as circles; dependums – goals, softgoals, tasks and resources – are represented as ovals,
clouds, hexagons and rectangles, respectively; and dependencies have the form depender → dependum → dependee.

GMT is a company specialized in telecom services in Belgium. Its lines of products and services range from phones & fax, conferencing, line solutions, internet & e-business, mobile solutions, and voice & data management. As shown in Figure 2, the structure of the commercial organization follows the structure-in-5. An Executive Committee constitutes the Strategic Apex. It is responsible for defining the general strategy of the organization. Five chief managers (finances, operations, divisions management, marketing, and R&D) apply the specific aspects of the general strategy in their areas of competence: Finances & Operations is in charge of Budget and Sales Planning & Control, Divisions Management is responsible for Implementing Sales Strategy, and Marketing and R&D define the Sales Policy and Technological Policy. The Divisions Management groups managers that coordinate all managerial aspects of product and service sales. It relies on Finance & Operations for handling Planning and Control of products and services, and it depends on Marketing for accurate Market Studies and on R&D for Technological Awareness.
Figure 2. GMT’s sales organization as a Structure-in-5
The Finances & Operations departments constitute the Technostructure, in charge of management control (financial and quality audit) and sales planning, including scheduling and resource management. The Support involves the staff of Marketing and R&D. Both departments jointly define and support the Sales Policy. The Marketing department coordinates Market Studies (customer positioning and segmentation, pricing, sales incentives, . . .) and provides the Operational Core with Documentation and Promotion services. The R&D staff is responsible for defining the technological policy, such as technological awareness services. It also assists Sales people and Consultants with Expertise Support and Technology Training. Finally, the Operational Core groups the Sales people and Line consultants under the supervision and coordination of Divisions Managers. They are in charge of selling products and services to actual and potential customers. Figure 3 abstracts the structures explored in the case studies of Figures 1 and 2 as a Structure-in-5 pattern composed of five actors. The case studies also suggested a number of constraints to supplement the basic pattern:
• The dependencies between the Strategic Apex as depender and the Technostructure, Middle Line and Support as dependees must be of type goal.
• A softgoal dependency models the strategic dependence of the Technostructure, Middle Line and Support on the Strategic Apex.
• The relationships between the Middle Line and the Technostructure and Support must be goal dependencies.
• The Operational Core relies on the Technostructure and Support through task and resource dependencies.
• Only task dependencies are permitted between the Middle Line (as depender or dependee) and the Operational Core (as dependee or depender).
To specify the formal properties of the pattern, we use Formal Tropos (Fuxman, Liu, Mylopoulos, Roveri & Traverso 2004), which extends the primitives of i* with a formal language comparable to that of KAOS (Dardenne et al. 1993). Constraints on i* specifications are thus formalized in a first-order linear-time temporal logic.
Figure 3. The structure-in-5 pattern
Formal Tropos provides three basic types of metaclasses: actor, dependency, and entity (Giorgini, Kolp & Mylopoulos 2002). The attributes of a Formal Tropos class denote relationships among different objects being modeled.

Metaclasses:

Actor := Actor name [attributes] [creation-properties] [invar-properties] [actor-goal]
with subclasses:
Agent (with attributes occupies: Position, play: Role)
Position (with attributes cover: Role)
Role

Dependency := Dependency name type mode Depender name Dependee name [attributes] [creation-properties] [invar-properties] [fulfill-properties]

Entity := Entity name [attributes] [creation-properties] [invar-properties]

Actor-Goal := (Goal | Softgoal) name mode Fulfillment (actor-fulfill-property)

Classes: classes are instances of metaclasses. In Formal Tropos, constraints on the lifetime of the (meta)class instances are given in a first-order linear-time temporal logic (see (Fuxman et
al. 2004) for more details). Special predicates can appear in the temporal logic formulas: predicate JustCreated(x) holds in a state if element x exists in this state but not in the previous one; predicate Fulfilled(x) holds if x has been fulfilled; and predicate JustFulfilled(x) holds if Fulfilled(x) holds in this state, but not in the previous one. In the following, we only present some specifications for the Strategic Management and Operational Management dependencies.

Actor StrategicApex
Actor MiddleLine
Actor Support
Actor Technostructure
Actor OperationalCore

Dependency StrategicManagement
Type SoftGoal
Depender te: Technostructure, ml: MiddleLine, su: Support
Dependee sa: StrategicApex
Invariant
∀dep: Dependency (JustCreated(dep) → Consistent(self, dep))
∀ag: Actor-Goal (JustCreated(ag) → Consistent(self, ag))
Fulfillment
Specifying Software Models with Organizational Styles
∀dep: Dependency ((dep.type = goal ∧ dep.depender = sa ∧ (dep.dependee = te ∨ dep.dependee = ml ∨ dep.dependee = su)) ∧ Fulfilled(self) → ♦Fulfilled(dep))

[Invariant properties specify, respectively, that the strategic management softgoal must be consistent with any other dependency of the organization and with any other goal of the actors in the organization. The predicate Consistent depends on the particular organization we are considering and is specified in terms of goal properties to be satisfied. The fulfillment of the dependency necessarily implies that the goal dependencies between the Middle Line, the Technostructure, and the Support as dependees, and the Strategic Apex as depender, have been achieved some time in the past.]

Dependency OperationalManagement
Type Goal
Mode achieve
Depender sa: StrategicApex
Dependee ml: MiddleLine
Invariant
Consistent(self, StrategicManagement)
∃c: Coordination (c.type = task ∧ c.dependee = ml ∧ c.depender = OperationalCore ∧ ImplementedBy(self, c))
Fulfillment
∀ts: Technostructure, dep: Dependency ((dep.type = goal ∧ dep.depender = ml ∧ dep.dependee = ts) ∧ Fulfilled(self)) → ♦Fulfilled(dep)

[The fulfillment of the Operational Management goal implies that all goal dependencies between the Middle Line as depender and the Technostructure as dependee have been achieved some time in the past. Invariant properties specify that the Operational Management goal has to be consistent with the Strategic Management softgoal and that there exists a coordination task (a task dependency between the Middle Line and the Operational Core) that implements (ImplementedBy) the OperationalManagement goal.]

In addition, the following structural (global) properties must be satisfied for the Structure-in-5 pattern:

• ∀inst1, inst2: StrategicApex (inst1 = inst2)
[There is a single instance of the Strategic Apex (the same constraint also holds for the Middle Line, the Technostructure, the Support and the Operational Core).]
• ∀sa: StrategicApex, te: Technostructure, ml: MiddleLine, su: Support, dep: Dependency ((dep.dependee = sa ∧ (dep.depender = te ∨ dep.depender = ml ∨ dep.depender = su)) → dep.type = softgoal)
[Only softgoal dependencies are permitted between the Strategic Apex as dependee and the Technostructure, the Middle Line, and the Support as dependers.]
• ∀sa: StrategicApex, te: Technostructure, ml: MiddleLine, su: Support, dep: Dependency ((dep.depender = sa ∧ (dep.dependee = te ∨ dep.dependee = ml ∨ dep.dependee = su)) → dep.type = goal)
[Only goal dependencies are permitted between the Technostructure, the Middle Line, and the Support as dependees, and the Strategic Apex as depender.]
• ∀oc: OperationalCore, ml: MiddleLine, dep: Dependency (((dep.dependee = oc ∧ dep.depender = ml) ∨ (dep.dependee = ml ∧ dep.depender = oc)) → dep.type = task)
[Only task dependencies are permitted between the Middle Line and the Operational Core.]
• ∀te: Technostructure, oc: OperationalCore, dep: Dependency ((dep.dependee = te ∧ dep.depender = oc) → (dep.type = task ∨ dep.type = resource))
[Only resource or task dependencies are permitted between the Technostructure and the Operational Core (the same constraint also holds for the Support).]
• ∀a: Actor, ml: MiddleLine (∃dep: Dependency ((dep.depender = a ∧ dep.dependee = ml) ∨ (dep.dependee = a ∧ dep.depender = ml)) → ((∃sa: StrategicApex (a = sa)) ∨ (∃su: Support (a = su)) ∨ (∃te: Technostructure (a = te)) ∨ (∃oc: OperationalCore (a = oc))))
[No dependency is permitted between an external actor and the Middle Line (the same constraint also holds for the Operational Core).]
This specification can be used to establish that a certain i* model constitutes an instance of the Structure-in-5 pattern. For example, the i* model of Figure 1 can be shown to be such an instance, in which the actors are instances of the Structure-in-5 actor classes (e.g., Direction and IT&Documentation are instances of the Strategic Apex and the Support, respectively), the dependencies are instances of Structure-in-5 dependency classes (e.g., Agency Global Strategy is an instance of the Strategic Management), and all of the above global properties are enforced (e.g., since there are only two task dependencies between Campaigns Management and Graphics&Edition, the fourth property holds).
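To make this checking step concrete, the following minimal Python sketch (ours; not part of any Formal Tropos toolset) reduces an i* model to (depender, dependee, dependency-type) triples, maps each actor to its Structure-in-5 class, and tests the dependency-type properties above. The actor names echo Figure 1; the encoding itself is an assumption made for illustration:

# Each actor of the model is mapped to its Structure-in-5 class.
LAYER = {
    "Direction": "StrategicApex",
    "CampaignsManagement": "MiddleLine",
    "AdministrationAccounts": "Technostructure",
    "ITDocumentation": "Support",
    "GraphicsEdition": "OperationalCore",
}

# A fragment of the Figure 1 model as (depender, dependee, type) triples.
DEPS = [
    ("AdministrationAccounts", "Direction", "softgoal"),
    ("Direction", "CampaignsManagement", "goal"),
    ("CampaignsManagement", "GraphicsEdition", "task"),
]

MIDDLE = {"Technostructure", "MiddleLine", "Support"}

def violations(deps, layer):
    errs = []
    for depender, dependee, dtype in deps:
        a, b = layer[depender], layer[dependee]
        # Dependencies on the Strategic Apex must be softgoals.
        if b == "StrategicApex" and a in MIDDLE and dtype != "softgoal":
            errs.append((depender, dependee, "expected softgoal"))
        # The Strategic Apex depends on the middle layer only via goals.
        if a == "StrategicApex" and b in MIDDLE and dtype != "goal":
            errs.append((depender, dependee, "expected goal"))
        # Middle Line and Operational Core interact only via tasks.
        if {a, b} == {"MiddleLine", "OperationalCore"} and dtype != "task":
            errs.append((depender, dependee, "expected task"))
        # Operational Core relies on Technostructure/Support via task or resource.
        if a == "OperationalCore" and b in {"Technostructure", "Support"} and dtype not in {"task", "resource"}:
            errs.append((depender, dependee, "expected task or resource"))
    return errs

print(violations(DEPS, LAYER))  # -> [] for the compliant fragment above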
Joint Venture

We describe here two alliances – Airbus (Dussauge & Garrette 1999) and a more detailed one, Carsid (Wautelet, Kolp & Achbany 2006) – that will serve to model the joint venture structure as an organizational pattern and to propose a semi-formal specification.

Airbus. The Airbus Industrie joint venture coordinates collaborative activities between European aeronautic manufacturers to build and market Airbus aircraft. The joint venture involves four partners: British Aerospace (UK), Aerospatiale (France), DASA (Daimler-Benz Aerospace, Germany) and CASA (Construcciones Aeronauticas SA, Spain). Research, development and production tasks have been distributed among the partners, avoiding any duplication. Aerospatiale is mainly responsible for developing and manufacturing the cockpit of the aircraft and for system integration. DASA develops and manufactures the fuselage, British Aerospace the wings and CASA the tail unit. Final assembly is carried out in Toulouse (France) by Aerospatiale. Unlike production, commercial and decisional activities have not been split between partners. All strategy, marketing, sales and after-sales operations are entrusted to the Airbus Industrie joint venture, which is the only interface with external stakeholders such as customers. To buy an Airbus, or to maintain their fleet, customer airlines cannot approach one or other of the partner firms directly, but have to deal with Airbus Industrie. Airbus Industrie, which is a real manufacturing company, defines the alliance's product policy and elaborates the specifications of each new model of aircraft to be launched. Airbus defends the point of view and interests of the alliance as a whole, even against the partner companies themselves when the individual goals of the latter enter into conflict with the collective goals of the alliance. Figure 4 models the organization of the Airbus Industrie joint venture using the i* strategic
Figure 4. The Airbus Industrie Joint Venture
dependency model. Airbus assumes two roles: Airbus Industrie and Airbus Joint Venture. Airbus Industrie deals with demands from customers; Customer depends on it to receive Airbus aircraft or maintenance services. The Airbus Joint Venture role ensures the interface for the four partners (CASA, Aerospatiale, British Aerospace and DASA) with Airbus Industrie, defining Airbus strategic policy, managing conflicts between the four Airbus partners, defending the interests of the whole alliance and defining new aircraft specifications. Airbus Joint Venture coordinates the four partners, ensuring that each of them assumes a specific task in the building of Airbus aircraft: wings building for British Aerospace, tail unit building for CASA, cockpit building and aircraft assembling for Aerospatiale, and fuselage building for DASA. Since Aerospatiale assumes two different tasks, it is modeled as two roles: Aerospatiale Manufacturing and Aerospatiale Assembling. Aerospatiale Assembling depends on each of the four partners to receive the different parts of the planes.

Carsid (Carolo-Sidérurgie) is a joint venture that has recently arisen from the global concentration movement in the steel industry. The alliance, physically located in the steel basin of Charleroi in Belgium, has been formed by the steel companies
Duferco (Italy), Usinor (France) – which also partially owns Cockerill-Sambre (Belgium) through the Arcelor group – and Sogepa (Belgium), a public investment company representing the Walloon Region Government. Usinor has also brought its subsidiary Carlam into the alliance. Roughly speaking, the aim of a steel manufacturing company like Carsid is to extract iron from the ore and to turn it into semi-finished steel products. Several steps compose the transformation process, each step generally assumed by a specific metallurgical plant:

• Sintering Plant. Sintering is the preparation of the iron ore for the blast furnace. The minerals are crushed and calibrated to form a sinter charge.
• Coking Plant. Coal is distilled (i.e., heated in an air-impoverished environment in order to prevent combustion) to produce coke.
• Blast Furnace. Coke is used as a combustion agent and as a reducing agent to remove the oxygen from the sinter charge. The coke and sinter charge are loaded together into the blast furnace to produce cast iron.
• Steel Making Plant. Different steps (desulphuration, oxidation, steel adjustment, cooling, . . .) are necessary to turn cast iron into steel slabs and billets. First, elements other than iron are removed to give molten steel. Then supplementary elements (titanium, niobium, vanadium, . . .) are added to make a more robust alloy. Finally, the result – finished steel – is solidified to produce slabs and billets.
• Rolling Mill. The manufacture of semi-finished products involves a process known as hot rolling. Hot-rolled products are of two categories: flat (plates, coiled sheets, sheeting, strips, . . .) produced from steel slabs, and long (wire, bars, rails, beams, girders, . . .) produced from steel billets.
Figure 5. The Carsid Joint Venture

Figure 5 models the organization of the Carsid joint venture in i*. Carsid assumes two roles: Carsid S.A. ("Société Anonyme", the English equivalent of "Ltd") and Carsid Joint Venture. Carsid S.A. is the legal and contractual interface of the joint venture. It handles the sales of steel semi-finished products (bars, plates, rails, sheets, etc., but also slabs and billets) and co-products (coke that does not meet blast furnace requirements, rich gases from the different plants, tar, naphthalene, etc.) to external industries such as vehicle (automobile, train, boat, . . .) manufacturers, foundries, gas companies and building companies. It is also in charge of the environmental policy, a strategic aspect for steelworks, which are polluting plants. Most important, Carsid has been set up with the help of the Walloon Region to guarantee job security for about 2000 workers in the basin of Charleroi. Indeed, the steel industry in general and the Walloon metallurgical basins in particular are sectors in difficulty with high unemployment rates. As a corollary, the joint venture is committed to improving the regional economy and maintaining work in the region. Carsid has thus been contractually obliged to plan maintenance investment (e.g., blast furnace refection, renovation of coke oven batteries, . . .) and to develop production plans involving regional subcontractors and suppliers. Since steelmaking is a hard and dangerous work sector, Carsid, like any other steelworks, is legally committed to respect, develop and promote accident prevention standards. The Carsid joint venture itself coordinates the steel manufacturing process. The sintering phase to prepare iron ore is the responsibility of the Duferco
Sintering Plant, while the Usinor Coking Plant distills coal to turn it into coke. The sinter charge and coke are used by the Cockerill Blast Furnace to produce cast iron by removing oxygen from the sinter. The Duferco Steel Making Plant transforms cast iron into steel to produce slabs and billets for the Carlam (Carolo-Laminoir) Rolling Mill, in charge of the hot rolling tasks. Sogepa, the public partner, has the responsibility to develop regional initiatives to promote Carsid activities, particularly in the Walloon Region and in Belgium. Figure 6 abstracts the joint venture structures explored in the case studies of Figures 4 and 5. The case studies suggest a number of constraints to supplement the basic pattern:

• Partners depend on each other for providing and receiving resources.
• Operation coordination is ensured by the joint manager actor, which depends on partners for the accomplishment of assigned tasks.
• The joint manager actor must assume two roles: a private interface role to coordinate partners of the alliance, and a public interface role to take strategic decisions, define the policy for the private interface and represent the interests of the whole partnership with respect to external stakeholders.

Part of the Joint Venture pattern specification is the following:

Role JointManagerPrivateInterface
Goal CoordinatePartners
Role JointManagerPublicInterface
Goal TakeStrategicDecision
SoftGoal RepresentPartnershipInterests
Actor Partner

and the following structural (global) properties must be satisfied:

• ∀jmpri1, jmpri2: JointManagerPrivateInterface (jmpri1 = jmpri2)
[Only one instance of the joint manager.]
• ∀p1, p2: Partner, dep: Dependency (((dep.depender = p1 ∧ dep.dependee = p2) ∨ (dep.depender = p2 ∧ dep.dependee = p1)) → (dep.type = resource))
[Only resource dependencies between partners.]
• ∀jmpri: JointManagerPrivateInterface, p: Partner, dep: Dependency ((dep.dependee = p ∧ dep.depender = jmpri) → dep.type = task)
[Only task dependencies between partners and the joint manager, with the joint manager as depender.]
• ∀jmpri: JointManagerPrivateInterface, jmpui: JointManagerPublicInterface, dep: Dependency ((dep.depender = jmpri ∧ dep.dependee = jmpui) → (dep.type = goal ∨ dep.type = softgoal))
[Only goal or softgoal dependencies between the joint manager roles.]
• ∀dep: Dependency, p1: Partner ((dep.depender = p1 ∨ dep.dependee = p1) → ((∃p2: Partner (p1 ≠ p2 ∧ (dep.depender = p2 ∨ dep.dependee = p2))) ∨ (∃jmpi: JointManagerPrivateInterface (dep.depender = jmpi ∨ dep.dependee = jmpi))))
[Partners only have relationships with other partners or the joint manager private interface.]
• ∀dep: Dependency, jmpi: JointManagerPrivateInterface ((dep.depender = jmpi ∨ dep.dependee = jmpi) → ((∃p: Partner (dep.depender = p ∨ dep.dependee = p)) ∨ (∃jmpui: JointManagerPublicInterface (dep.depender = jmpui ∨ dep.dependee = jmpui))))
[The joint manager private interface only has relationships with the joint manager public interface or partners.]

Figure 6. The Joint Venture pattern
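The joint-venture properties reduce to similar edge-type rules. Reusing the illustrative triple encoding from the Structure-in-5 sketch (again an assumption of ours, with actor names echoing Figure 4, not a published tool):

JV_LAYER = {
    "AirbusJointVenture": "JMPrivate",   # joint manager, private interface
    "AirbusIndustrie": "JMPublic",       # joint manager, public interface
    "DASA": "Partner",
    "BritishAerospace": "Partner",
}

def jv_violations(deps, layer):
    errs = []
    for depender, dependee, dtype in deps:
        a, b = layer[depender], layer[dependee]
        # Partners may only exchange resources with each other.
        if a == "Partner" and b == "Partner" and dtype != "resource":
            errs.append((depender, dependee, "expected resource"))
        # The private interface delegates only tasks to partners.
        if a == "JMPrivate" and b == "Partner" and dtype != "task":
            errs.append((depender, dependee, "expected task"))
        # Only (soft)goal dependencies between the two manager roles.
        if {a, b} == {"JMPrivate", "JMPublic"} and dtype not in {"goal", "softgoal"}:
            errs.append((depender, dependee, "expected goal or softgoal"))
    return errs

print(jv_violations([("DASA", "BritishAerospace", "resource")], JV_LAYER))  # -> []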
EVALUATION

Patterns can be compared and evaluated with quality attributes (Shaw & Garlan 1996), also called non-functional requirements (Chung, Nixon, Yu & Mylopoulos 2000). The following requirements seem particularly relevant for organizational structures (Do, Faulkner & Kolp 2003, Kolp et al. 2006):

Predictability (Woods & Barbacci 1999). Actors can have a high degree of autonomy
(Wooldridge & Jennings 1995) in the way that they undertake action and communication in their domains. It can then be difficult to predict individual characteristics as part of determining the behavior of the system at large. Generally, predictability is in tension with the actors' capability to be adaptive and responsive: actors must be predictable enough to anticipate and plan actions while being responsive and adaptive to unexpected situations.

Security. Actors are often able to identify their own data and knowledge sources, and they may undertake additional actions based on these sources (Woods & Barbacci 1999). Strategies for verifying the authenticity of these data sources by individual actors are an important concern in the evaluation of overall system quality since, in addition to possibly misleading information acquired by actors, there is the danger of hostile external entities spoofing the system to acquire information accorded to trusted domain actors.

Adaptability. Actors may be required to adapt to modifications in their environment. These may include changes to a component's communication protocol, the dynamic introduction of a previously unknown kind of component, or the manipulation of existing actors. Generally, adaptability depends on the capabilities of the single actors to learn and predict the changes of the environments in which they act (Weiss 1997), and also on their capability to make diagnoses (Horling, Lesser, Vincent, Bazzan & Xuan 1999), that is, to detect and determine the causes of a fault based on its symptoms. However, successful organization environments tend to balance the degree of reactivity and predictability of the single actors with their capability to be adaptive.

Coordinability. Actors are not particularly useful unless they are able to coordinate with other agents. Coordination is generally (Jennings 1996) used to distribute expertise, resources or information among the actors (actors may have different capabilities, specialized knowledge,
different sources of information, resources, responsibilities, limitations, charges for services, etc.), to solve interdependencies between actors' actions (interdependencies occur when goals undertaken by individual actors are related), to meet global constraints (when the solution being developed by a group of actors must satisfy certain conditions if it is to be deemed successful), and to make the system efficient (even when individuals can function independently, thereby obviating the need for coordination, information discovered by one actor can be of sufficient use to another actor that both actors can solve the problem twice as fast). Coordination can be realized in two ways:

• Cooperativity. Actors must be able to coordinate with other entities to achieve a common purpose or simply their local goals. Cooperation can either be communicative, in that the actors communicate (the intentional sending and receiving of signals) with each other in order to cooperate, or non-communicative (Doran, Franklin, Jennings & Norman 1997). In the latter case, actors coordinate their cooperative activity by each observing and reacting to the behaviour of the other. In deliberative organizations, actors jointly plan their actions so as to cooperate with each other.
• Competitivity. Deliberative negotiating organizations (Doran et al. 1997) are like deliberative ones, except that they have an added dose of competition. The success of one actor implies the failure of others.
Availability. Actors that offer services to other actors must implicitly or explicitly guard against the interruption of the offered services.

Fallibility-Tolerance. A failure of one actor does not necessarily imply a failure of the whole organization. The organization then needs to check the completeness and accuracy of its information and knowledge transactions and workflows. To prevent failure, different actors can have similar or replicated capabilities and refer to more than one actor for a specific behavior.

Modularity. Modularity (Shehory 1998) increases the efficiency of service execution, reduces interaction overhead and usually enables high flexibility. On the other hand, it implies constraints on inter-organization communication.

Aggregability. Some actors are parts of other actors. They surrender to the control of the composite entity. This control results in efficient workflow execution and low interaction overhead, but it prevents the organization from benefiting from flexibility.

As an illustration, we evaluate the patterns with respect to coordinativity, predictability, fallibility-tolerance and adaptability. The evaluation can be done in a similar way for the other non-functional requirements; for lack of space, we refer the reader to the bibliography for the other attributes.

• The structure-in-5 improves coordinativity among actors by differentiating the data hierarchy (the support actor) from the control hierarchy (supported by the operational core, technostructure, middle agency and strategic apex). The existence of three different levels of abstraction (1 - Operational Core; 2 - Technostructure, Middle Line and Support; 3 - Strategic Apex) addresses the need for managing predictability. Besides, higher levels are more abstract than lower levels: lower levels only involve resource and task dependencies, while higher ones propose intentional (goal and softgoal) relationships. Checks and control mechanisms can be integrated at different levels of abstraction, ensuring redundancy from different perspectives and considerably increasing fallibility-tolerance. Since the structure-in-5 separates the data and control hierarchies, the integrity of these two hierarchies can also be verified independently. The structure-in-5 separates the typical components of an organization, isolating them from each other and thus allowing dynamic adaptability. But since it is restricted to no more than five major components, further refinement has to take place inside the components.
• The joint venture supports coordinativity in the sense that each partner actor interacts via the joint manager for strategic decisions. Partners indicate their interest, and the joint manager either returns the strategic information immediately or mediates the request to some other partner. However, since partners are usually heterogeneous, it could be a drawback to define a common interaction background. The central position and role of the joint manager is a means for resolving conflicts and preventing unpredictability. Through its joint manager, the joint venture proposes a central communication controller. It is less clear how the joint venture pattern addresses fallibility-tolerance, notably reliability; however, exceptions, supervision, and monitoring can improve its overall score with respect to these qualities. Manipulation of partners can be done easily to adapt the structure, by registering new ones with the joint manager. However, since partners can also exchange resources directly with each other, existing dependencies should be updated as well. The joint manager cannot be removed, due to its central position.
Table 2 summarizes the strengths and weaknesses of the reviewed patterns.

Table 2. Strengths and weaknesses of some patterns

                        Structure-in-5    Joint-Venture
Coordinativity          ++                +
Predictability          +                 +
Fallibility-Tolerance   ++                +
Adaptability            +                 +

To cope with non-functional requirements and select the pattern for the organizational setting, we go through a means-ends analysis using the non-functional requirements (NFR) framework (Chung et al. 2000). We refine the identified requirements into sub-requirements that are more precise and evaluate alternative organizational patterns against them, as shown in Figure 7. The analysis is intended to make explicit the space of alternatives for fulfilling the top-level requirements. The patterns are represented as operationalized requirements (saying, roughly, "model the organizational setting of the system with the pyramid, structure-in-5, joint venture, arm's-length . . . pattern"). The evaluation results in contribution relationships from the patterns to the non-functional requirements, labeled "+", "++", "–", "– –". Design rationale is represented by claims drawn as dashed clouds. They make it possible for domain characteristics (such as priorities) to be considered and properly reflected in the decision-making process, e.g., to provide reasons for selecting or rejecting possible solutions (+, –). Exclamation marks (! and !!) are used to mark priority requirements, while a check mark "√" indicates an accepted requirement and a cross "X" labels a denied requirement. Relationship types (AND, OR, ++, +, –, and – –) between NFRs are formalized to offer a tractable proof procedure. AND/OR relationships correspond to the classical AND/OR decomposition relationships: if requirement R0 is AND-decomposed (respectively, OR-decomposed) into R1, R2, ..., Rn, then all (respectively, at least one) of these requirements must be satisfied for R0 to be satisfied. So, for instance, in Figure 7, Coordinativity is AND-decomposed into Distributivity, Participability, and Commonality. Relationships "+" and "–" model respectively a situation where a requirement contributes positively or negatively
towards the satisfaction of another one. For instance, in Figure 7, Joint Venture contributes positively to the satisfaction of Distributivity and negatively to Reliability. In addition, relationships "++" and "– –" model a situation where the satisfaction of a requirement implies the satisfaction or denial of another one. In Figure 7, for instance, the satisfaction of Structure-in-5 implies the satisfaction of the requirements Reliability and Redundancy.

Figure 7. Partial evaluation for organizational patterns

The analysis for selecting an organizational setting that meets the requirements of the system to be built is based on the propagation algorithms presented in (Giorgini, Mylopoulos, Nicchiarelli & Sebastiani 2002). Basically, the idea is to assign an initial set of labels to some requirements of the graph, concerning their satisfiability and deniability, and to see how this assignment leads to the propagation of labels to other requirements. In particular, we
adopt from (Giorgini, Mylopoulos, Nicchiarelli & Sebastiani 2002) both a qualitative and a numerical axiomatization for goal (requirement) modeling primitives, together with label propagation algorithms that are shown to be sound and complete with respect to their respective axiomatizations. In the following, we give a brief description of the qualitative algorithm. To each requirement R, we associate two variables Sat(R) and Den(R), ranging over {F, P, N} (full, partial, none) with F > P > N, representing the current evidence of satisfiability and deniability of the requirement R. For example, Sat(Ri) ≥ P states that there is at least partial evidence that Ri is satisfiable. Starting from an initial set of input values for Sat(Ri) and Den(Ri) assigned to (a subset of) the requirements in the graph, we propagate the values through the propagation rules of Table 3.
Table 3. Propagation rules for satisfiability in the qualitative framework. A dual table is given for deniability propagation.

          G2 →+ G1          G2 →− G1          G2 →++ G1    G2 →−− G1
Sat(G1)   min{Sat(G2), P}   N                 Sat(G2)      N
Den(G1)   N                 min{Sat(G2), P}   N            Sat(G2)
Propagation rules for an AND (respectively, OR) relationship use the min-value function (respectively, the max-value function) for satisfiability, and the max-value function (respectively, the min-value function) for deniability. The schema of the algorithm is described in Figure 8. Initial, Current and Old are arrays of pairs (Sat(Ri), Den(Ri)), one for each requirement Ri of the graph, representing respectively the initial, current and previous labeling status of the graph. The array Current is first initialized to the initial values Initial given as input by the user. At each step, for every requirement Ri, (Sat(Ri), Den(Ri)) is updated by propagating the values of the previous step. This is done until a fixpoint is reached, that is, until no further update is possible (Current == Old). The update of (Sat(Ri), Den(Ri)) works as follows. For each relation incoming into Ri, the satisfiability and deniability values derived from the old values of the source requirements are computed by applying the rules of Table 3. Then, the maximum between these computed values and the old values is returned.
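As an illustration, the qualitative procedure just described can be coded directly. The sketch below is a reimplementation for exposition, not the authors' tool: it encodes the scale N < P < F, the Table 3 rules for the +, –, ++ and – – links (with their duals for deniability), min/max propagation for AND/OR decompositions, and the fixpoint loop of Figure 8.

```python
# Illustrative sketch of the qualitative label propagation algorithm.
# Each requirement carries (Sat, Den) values in {"N", "P", "F"}, N < P < F.

ORDER = {"N": 0, "P": 1, "F": 2}
def vmin(*vals): return min(vals, key=lambda v: ORDER[v])
def vmax(*vals): return max(vals, key=lambda v: ORDER[v])

def apply_relation(kind, sources, labels):
    """Evidence a relation offers its target, per Table 3 and its dual."""
    if kind == "AND":   # min for satisfiability, max for deniability
        return (vmin(*(labels[s][0] for s in sources)),
                vmax(*(labels[s][1] for s in sources)))
    if kind == "OR":    # max for satisfiability, min for deniability
        return (vmax(*(labels[s][0] for s in sources)),
                vmin(*(labels[s][1] for s in sources)))
    (s,) = sources
    sat, den = labels[s]
    if kind == "+":  return (vmin(sat, "P"), vmin(den, "P"))
    if kind == "-":  return (vmin(den, "P"), vmin(sat, "P"))
    if kind == "++": return (sat, den)
    if kind == "--": return (den, sat)
    raise ValueError(kind)

def propagate(initial, relations):
    """Iterate until a fixpoint (Current == Old), as in Figure 8.
    `relations` is a list of (sources, kind, target) tuples."""
    current = dict(initial)
    while True:
        old = dict(current)
        for sources, kind, target in relations:
            sat, den = apply_relation(kind, sources, old)
            # Keep the maximum of the newly derived and the old values.
            current[target] = (vmax(sat, old[target][0]),
                               vmax(den, old[target][1]))
        if current == old:
            return current

# Example: Coordinativity AND-decomposed into three subrequirements,
# with Joint Venture contributing "+" to Distributivity (cf. Figure 7).
init = {"JV": ("F", "N"), "Distributivity": ("N", "N"),
        "Participability": ("P", "N"), "Commonality": ("P", "N"),
        "Coordinativity": ("N", "N")}
rels = [(("JV",), "+", "Distributivity"),
        (("Distributivity", "Participability", "Commonality"),
         "AND", "Coordinativity")]
print(propagate(init, rels)["Coordinativity"])  # -> ('P', 'N')
```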
A REQUIREMENTS-DRIVEN METHODOLOGY

This research is conducted in the context of the early requirements phase of Tropos (Giorgini et al. 2004, Giorgini et al. 2005), a software development methodology for building multi-agent systems which is founded on the concepts of actor and goal. The Tropos methodology adopts ideas from multi-agent systems technologies, mostly to define the detailed design and implementation phase, and ideas from requirements engineering and organizational modeling, where agents/actors and goals have been used heavily for early requirements analysis (Dardenne et al. 1993, Yu 1995). In particular, the Tropos project adopts Eric Yu's i* model, which offers actors (agents, roles, or positions), goals, and actor dependencies as primitive concepts for analyzing an application during organizational modeling.
Figure 8. Schema of the label propagation algorithm
The key assumption which distinguishes Tropos from other methodologies is that actors and goals are used as fundamental concepts for analysis and design during all phases of software development, not just requirements analysis. This means that, in the light of this paper, Tropos describes in terms of the same concepts and patterns both the organizational environment within which a system will eventually operate and the system itself. Tropos spans four phases of software development:

1. Organizational modeling, concerned with the understanding of a problem by studying an organizational setting; the output is an organizational model which includes relevant actors, their goals and dependencies.
2. Requirements analysis, in which the system-to-be is described within its operational environment, along with relevant functions and qualities.
3. Architectural design, in which the system's global architecture is defined in terms of subsystems, interconnected through data, control and dependencies.
4. Detailed design, in which the behaviour of each architectural component is defined in further detail.
CONCLUSION

Modelers need to rely on patterns, styles, and idioms to build their models, whatever the purpose. We argue that, as with other phases of software development, organizational modeling can be facilitated by the adoption of organizational patterns. This paper focuses on two such patterns and studies them in detail, through examples, a formalization using Formal Tropos, and an evaluation with respect to desirable requirements. There have been many proposals for software patterns (e.g., (Kolp, Do & Faulkner 2005)) since the original work on design patterns (Gamma, Helm, Johnson & Vlissides 1995). Some of this work focuses on requirements patterns. For example, (Konrad & Cheng 2002) proposes a set of requirements patterns for embedded software systems. These patterns are represented in UML and cover both structural and behavioral aspects of a requirements specification. Along similar lines, (Fowler 1997) proposes some general patterns in UML. In both cases, the focus is on requirements analysis, and the modeling language used is UML. On a different path, (Gross & Yu 2002) proposes a systematic approach for evaluating design patterns with respect to non-functional requirements (e.g., security, performance, reliability). Our approach differs from this work primarily in the fact that our proposal is founded on ideas from the Organization Theory and Strategic Alliances literature. We have previously described organizational patterns to be used for designing multi-agent system architectures (Kolp et al. 2006) and e-business systems (Kolp et al. 2004). Considering real-world organizations as a metaphor, systems involving many software actors, such as multi-agent systems, could benefit from the same organizational models. In the present paper, we have focused on patterns for modeling organizational settings, rather than software systems, and emphasized the need for organizational abstractions to better match the operational environment of the system-to-be during organizational modeling.

REFERENCES
Anton, A. I. (1996). Goal-based requirements analysis. In Proceedings of the 2nd Int. Conf. on Requirements Engineering, ICRE'96, Colorado Springs, USA, pp. 136–144.

Bates, L. D. V. (2006). Advertising Agency. http://www.ldv.be.

Chung, L. K., Nixon, B., Yu, E., & Mylopoulos, J. (2000). Non-Functional Requirements in Software Engineering. Kluwer Publishing.

Dardenne, A., van Lamsweerde, A., & Fickas, S. (1993). Goal-directed requirements acquisition. Science of Computer Programming, 20(1–2), 3–50. doi:10.1016/0167-6423(93)90021-G

Do, T. T., Faulkner, S., & Kolp, M. (2003). Organizational multi-agent architectures for information systems. In Proc. of the 5th Int. Conf. on Enterprise Information Systems, ICEIS'03, Angers, France, pp. 89–96.

Doran, J. E., Franklin, S., Jennings, N. R., & Norman, T. J. (1997). On cooperation in multi-agent systems. The Knowledge Engineering Review, 12(3), 309–314. doi:10.1017/S0269888997003111

Dussauge, P., & Garrette, B. (1999). Cooperative Strategy: Competing Successfully Through Strategic Alliances. Wiley and Sons.

Fowler, M. (1997). Analysis Patterns: Reusable Object Models. Addison-Wesley.

Fuxman, A., Liu, L., Mylopoulos, J., Roveri, M., & Traverso, P. (2004). Specifying and analyzing early requirements in Tropos. Requirements Engineering, 9(2), 132–150. doi:10.1007/s00766-004-0191-7

Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
Giorgini, P., Kolp, M., & Mylopoulos, J. (2002). Multi-agent and software architecture: A comparative case study. In Proc. of the 3rd International Workshop on Agent-Oriented Software Engineering, AOSE'02, Bologna, Italy, pp. 101–112.

Giorgini, P., Kolp, M., Mylopoulos, J., & Castro, J. (2005). A requirements-driven methodology for agent-oriented software. In B. Henderson-Sellers & P. Giorgini (Eds.), Agent-Oriented Methodologies. Idea Group Publishing, pp. 20–46.

Giorgini, P., Kolp, M., Mylopoulos, J., & Pistore, M. (2004). The Tropos methodology. In F. Bergenti, M.-P. Gleizes & F. Zambonelli (Eds.), Methodologies and Software Engineering for Agent Systems. Kluwer, pp. 89–105.

Giorgini, P., Mylopoulos, J., Nicchiarelli, E., & Sebastiani, R. (2002). Reasoning with goal models. In Proceedings of the 21st International Conference on Conceptual Modeling (ER 2002), Tampere, Finland, pp. 167–181.

GMT. (2006). GMT Consulting Group. http://www.gmtgroup.com/.

Gomes-Casseres, B. (1996). The Alliance Revolution: The New Shape of Business Rivalry. Harvard University Press.

Gross, D., & Yu, E. (2002). From non-functional requirements to design through patterns. Requirements Engineering, 6(1), 18–36. doi:10.1007/s007660170013

Horling, B., Lesser, V., Vincent, R., Bazzan, A., & Xuan, P. (1999). Diagnosis as an integral part of multi-agent adaptability. Technical Report UM-CS-1999-003, University of Massachusetts.

Jennings, N. R. (1996). Coordination techniques for distributed artificial intelligence. In G. M. P. O'Hare & N. R. Jennings (Eds.), Foundations of Distributed Artificial Intelligence. Wiley, pp. 187–210.

Kolp, M., Do, T., & Faulkner, S. (2004). A social-driven design of e-business systems. In Software Engineering for Multi-Agent Systems III: Research Issues and Practical Applications, Edinburgh, UK, pp. 70–84.

Kolp, M., Do, T., & Faulkner, S. (2005). Introspecting agent-oriented design patterns. In S. K. Chang (Ed.), Handbook of Software Engineering and Knowledge Engineering, Vol. 3: Recent Advances. World Scientific, pp. 151–177.

Kolp, M., Giorgini, P., & Mylopoulos, J. (2002a). Information systems development through social structures. In Proc. of the 14th Int. Conf. on Software Engineering and Knowledge Engineering, SEKE'02, Ischia, Italy, pp. 183–190.

Kolp, M., Giorgini, P., & Mylopoulos, J. (2002b). Organizational multi-agent architecture: A mobile robot example. In Proc. of the 1st Int. Conf. on Autonomous Agents and Multi-Agent Systems, AAMAS'02, Bologna, Italy, pp. 94–95.

Kolp, M., Giorgini, P., & Mylopoulos, J. (2003). Organizational patterns for early requirements analysis. In Proc. of the 15th Int. Conf. on Advanced Information Systems Engineering, CAiSE'03, Velden, Austria, pp. 617–632.

Kolp, M., Giorgini, P., & Mylopoulos, J. (2006). Multi-agent architectures as organizational structures. Autonomous Agents and Multi-Agent Systems, 13(1), 3–25. doi:10.1007/s10458-006-5717-6

Konrad, S., & Cheng, B. (2002). Requirements patterns for embedded systems. In Proc. of the 10th IEEE Joint International Requirements Engineering Conference, RE'02, Essen, Germany, pp. 127–136.

Mintzberg, H. (1992). Structure in Fives: Designing Effective Organizations. Prentice-Hall.

Morabito, J., Sack, I., & Bhate, A. (1999). Organization Modeling: Innovative Architectures for the 21st Century. Prentice Hall.

Scott, W. R. (1998). Organizations: Rational, Natural, and Open Systems. Prentice Hall.

Segil, L. (1996). Intelligent Business Alliances: How to Profit Using Today's Most Important Strategic Tool. Times Business.

Shaw, M., & Garlan, D. (1996). Software Architecture: Perspectives on an Emerging Discipline. Prentice Hall.

Shehory, O. (1998). Architectural properties of multi-agent systems. Technical Report CMU-RI-TR-98-28, Carnegie Mellon University.

Wautelet, Y., Kolp, M., & Achbany, Y. (2006). S-Tropos: An iterative SPEM-centric software project management process. Technical Report IAG Working Paper 06/01, ISYS Information Systems Research Unit, Catholic University of Louvain, Belgium. http://www.iag.ucl.ac.be/wp/.

Weiss, G. (Ed.). (1997). Learning in DAI Systems. Springer Verlag.

Woods, S. G., & Barbacci, M. (1999). Architectural evaluation of collaborative agent-based systems. Technical Report SEI-99-TR-025, SEI, Carnegie Mellon University, Pittsburgh, USA.

Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115–152.

Yoshino, M. Y., & Srinivasa Rangan, U. (1995). Strategic Alliances: An Entrepreneurial Approach to Globalization. Harvard Business School Press.

Yu, E. (1995). Modelling Strategic Relationships for Process Reengineering. PhD thesis, Department of Computer Science, University of Toronto.
This work was previously published in Global Implications of Modern Enterprise Information Systems: Technologies and Applications, edited by Angappa Gunasekaran, pp. 91-113, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 4.18
Mobile Strategy for E-Business Solution

Anthony S. Atkins
Staffordshire University, UK

A. K. Hairul Nizam Pengiran Haji Ali
Staffordshire University, UK
DOI: 10.4018/978-1-60566-156-8.ch010

ABSTRACT

It is becoming evident that mobile technology can enhance a current e-business system to provide competitive advantage in business activities. These enhancements in mobile device applications, such as mobile hotel check-in systems, m-payment systems for parking tickets, and mobile donor transplant systems, are evolving with the usage of wireless technologies such as Wi-Fi, Bluetooth, and WiMax (Worldwide Interoperability for Microwave Access). Other examples include wearable mobile technologies used in military observation tactics and civilian clothing accessories for entertainment purposes. The lack of a current mobile strategy can cause some businesses to overspend or underutilize potential mobile applications. The use of a mobile strategic framework will help provide insights into improving companies' commercial operations, and examples of these mobile solutions are outlined in relation to commercial applications which have been implemented in hospitals, retail Supply Chain Management (SCM), and Customer Relationship Management (CRM). These types of systems are known to improve quality of service and provide competitive advantage. A mobile framework is presented to introduce the application of user mobility to mobile usage as an extension of existing Intranet, Extranet, and Internet e-business applications. This Mobile Business Application Framework could assist practitioners in identifying the financial and competitive aspects of introducing mobile technology applications into their business infrastructure.
INTRODUCTION

Mobility has become a key factor in Information Technology (IT) strategy (Savvas, 2007). The literature indicates that the use of mobile devices can assist communication networks in business activities such as Supply Chain Management (SCM), parcel tracking and Customer Relationship Management (CRM) (UPS, 2005). Mobile Commerce (m-commerce) operates where mobile devices facilitate business operations that enhance and improve commercial activities (Varshney and Vetter, 2002). In commercial organisations, mobile devices such as Personal Digital Assistant (PDA) phones can help business users to organise their daily activities, such as taking notes, arranging appointments, storing contact phone numbers, receiving emails and surfing the internet, from a single mobile device (Jarvenpaa, 2006). Stockbrokers can receive critical information on their PDAs, such as changes in financial stocks and shares, at any time, and can also access financial documents and make amendments with the use of built-in word processing applications. Short Messaging Service (SMS) is another mobile solution for making mobile payments, or m-payments (Serbedzija et al, 2005). In Croatia, parking in the city can be difficult, as parking areas can be on either side of the road or in one large central lot located far from the driver's destination, and the hassle of walking to 'top up' a parking meter and then walking back to the destination can be tedious. An SMS m-payment service lets the driver pay for the parking meter by text message instead. The driver receives an SMS acknowledgement text indicating the expiration time of the parking meter and can then choose to return to the vehicle or extend the parking meter by replying to the SMS text (Serbedzija, 2005). Addenbrooke's Hospital in Cambridge, highly regarded for the success of its kidney operations in the last few years, is utilising BlackBerry mobile phone applications to wirelessly connect to the hospital's centralised database to keep track of available organ donors. The use of mobile devices allows surgeons and doctors to make on-the-spot decisions as they track available donors that match the patient's profile. An indirect application of these mobile phones in this situation is assisting emergency crews in applying first aid: in the event of an accident, by taking images of any unusual wounds and forwarding them to a specialist, a remote doctor can advise the ambulance crew to apply appropriate first aid to the victim (British Red Cross, 2004). The agility and mobility of using these mobile devices can prove advantageous to commercial operations (Harrington, 2006), especially e-commerce applications. The increased demand for mobile devices is due to improvements in wireless technology, which have allowed managers to select from a wide variety of mobile technologies to apply to their business. The use of m-commerce can provide both competitive advantage and a modern image to the business. However, there are risks associated with it, such as the cost of development and security issues relating to viruses and privacy policies. It is imperative that the business analyse the type of mobile device which can best enhance its business activity (Varshney and Vetter, 2002). Strategic IT frameworks could be used for assessing mobile business applications to aid business practitioners in decision making. Combining mobile applications and e-business can offer competitive advantage by creating new platforms to reach global markets (Farhoomand, 2003). E-business services allow companies to reduce costs and increase revenue from distribution. Increased online sales and the use of mobile devices can enhance mobile business activities. Mobile applications are currently being used in warehousing, Small and Medium Enterprise (SME) and Customer Relationship Management (CRM) systems that allow tracking of parcels, etc. These types of mobile devices suit the business needs and activities of the companies concerned. This principle of distribution and transaction can be adapted for e-government or e-society applications. In these cases information can be distributed electronically,
similar to the e-society i2010, which the European Commission is developing for a European Information Society that promotes growth and jobs in the EU (Hines, 2007). Johns Hopkins Hospital has implemented mobile applications to assist in its medical activities, such as e-prescriptions. This implementation saves the hospital $1,000 per day by providing the pharmaceutical information and the medicine that the patient needs (Brian, 2006). The use of data retrieval through a mobile device can similarly be used in warehousing; Wal-Mart stock can be identified using Radio Frequency Identification (RFID) tags read from a special mobile RFID reader (Sliwa, 2006). Nissan has a similar approach in its CRM activities, allowing the salesperson to answer customer queries on the spot (Greengard, 2006). There are two types of mobile data retrieval technologies, classified as web-enabled and standalone. The web-enabled architecture requires mobile devices to send and retrieve information from a centralised database. With the standalone architecture, the data are distributed and stored on the mobile devices. The database can share the data among mobile devices, but the original data remain on the mobile device (for example, when PDAs are used to collect questionnaire data). The type of wireless technology used, such as Wireless Local Area Network (WLAN), Wi-Fi, Bluetooth or WiMax, will also affect the type of mobile application that a company can apply in its business activities. The most popular are Bluetooth and Wi-Fi connections, which are simple and less expensive to use. The Japanese Wagamama restaurant chain in the UK (Terry, 2006) applies this type of mobile device. Wi-Fi applications are more technically diverse, as they can support both Personal Digital Assistants (PDAs) and laptop computers. The evolution of mobile technology has provided new opportunities by extending the mobile infrastructure into the business, which enables the business to gain competitive advantage. Although there are many strategic tools and techniques available, such as Porter's competitive analysis
and Value Chain (Porter, 2001), strategic tools for mobile applications appear to be limited (Varshney and Vetter, 2002; Chen and Nath, 2004), with little financial appraisal. A framework to assess strategic applications of mobile technology and the pertinent infrastructure is outlined to assist business managers in making financial and technological decisions in mobile business environments.
EXTENSION TO M-COMMERCE

Improved computing infrastructure has provided e-business with an advantage, because it allows global accessibility from any access point, either wired or wireless. E-business serves its purpose by providing the capability to achieve revenue from low-cost distribution and increased sales in the global market. Customers are able to access these websites not only from their homes, but also in cyber cafés and public hotspots, where users can access the internet using wireless LANs, and even on a mobile phone. 3G mobile phones and PDAs now have the ability to capture pictures and send them through MMS (Multimedia Messaging Service). 3G phones also allow video calls, video clips, and surfing of the internet via GPRS (General Packet Radio Service) or CDMA (Code Division Multiple Access), rather than through the classic WAP (Wireless Application Protocol). Users are now able to view web pages via their mobile devices, depending on whether their phones support HyperText Mark-up Language (HTML) or WAP capabilities. The increasing number of mobile phone users expected within the next decade (Roto, 2007) would enable businesses to sell products electronically over the mobile platform, similar to earlier e-business concepts. One example of this is ring tones, which have generated over $600 million in sales via consumers' mobile phones (Mobile Youth, 2006). An example of a mobile application that assists in improving business activities is the PDA insurance quotation service (Serbedzija,
2005). This application has been implemented in Germany, where young adults find the idea of using a PDA device to electronically search insurance quotations for their automobiles very useful, particularly as they are able to enter their details electronically via the mobile device. As a result, insurance companies in Germany have used these PDAs in their after-sales activities to improve their Customer Relationship Management (CRM) performance with their customers. Unfortunately, not all mobile technology can be applied to every business activity. Therefore, strategic tools and techniques are needed to assist companies in applying mobile applications to their business activities (Atkins, et al., 2006). The literature indicates that most mobile business applications are an extension of e-business rather than a separate technology (Sliwa, 2004; UPS, 2007; Hadfield, 2006; www.e-health-insider.com, 2004; Cyber-Lab, 2005).
HEALTH, SAFETY, AND PRIVACY ISSUES

The advancement of mobile devices has also brought about health and safety concerns that intense usage and the associated radiation could lead to cancer problems in the future (www.who.int, 2006). The radio frequency emission that mobile phones produce when they are on standby is very small, but during calls they transmit actively at frequencies of around 2 GHz. Constant exposure to these mobile phones can cause a 'heating' or 'thermal' effect at the ear when the user makes a call. This thermal effect has led the general public to perceive that mobile phones could cause cancer (www.who.int, 2006). The effect on the brain when exposed to this mobile technology is obviously a concern for the general public. The World Health Organisation (WHO) has indicated general health awareness and the possible dangers of mobile devices and of excessive use (www.who.int, 2006). The development of 3G technologies and the introduction of WiMax could involve around 47,000 base stations in the UK, of which two thirds would be installed on existing buildings. Another public concern in the usage of mobile phones is location awareness, which can trace a user's mobile location anywhere in the world by satellite or cellular positioning. This technology is similar to GPS (Global Positioning System) tracking, and can be exploited by military or police authorities with mobile users having no knowledge of being monitored.
MOBILE APPLICATIONS FRAMEWORK

Strategic planning is often used when starting a new product or improving business activities in a competitive environment. Managers often use strategic tools for forward planning, or to propose a new solution to increase business profits. The planning usually involves identifying weaknesses or opportunities in the business environment. The use of strategic management tools can provide helpful insights in competing against competitors (Ghalayini and Noble, 1996). Some business strategies involve applying new technology or merging with other business services to cut costs (Shaffer and Srinivasan, 2005). Other strategies suggest concentrating on the sources of income, such as ensuring customer satisfaction, or improving relations with suppliers and buyers. Some strategic ideas are ineffective because of the inability to predict environmental changes and because of poor coordination and communication with suppliers and customers. Identifying a strategic tool that can be used in a specific business tends to be difficult because of the array of tools available. Business tools such as SWOT (Strengths, Weaknesses, Opportunities and Threats) and PEST (Political, Economic, Socio-Cultural and Technological) analysis are commonly used for marketing and in IT for business strategy. Strategic tools and techniques,
such as the Strategic Grid (McFarlan et al, 1983), Porter's Five Forces (Porter, 2001), Value Chain (Porter, 2001), and the IT-Business Alignment tool (Henderson, 1993), can improve business activities and assist in providing competitive advantage. The proposed Mobile Business Application Framework, shown in Figure 1, is an aid to business management in decision making within the three regions of user mobility. The framework consists of two sections. The first is the Financial Scorecard, shown in Section A of Figure 1. The Financial Scorecard is used to assess the business process, activities and financial standing, similar to a cost-benefit analysis, and can identify the weaknesses and opportunities of the company (Lorsch, 1993). The Financial Scorecard helps the business to measure performance and resolve customer relations and strategy issues. Figure 1 illustrates that the Financial Scorecard has four perspectives, as shown in Section A: the Financial, Customer, Learning and Growth, and Internal Business Process perspectives respectively. Each of these perspectives evaluates the strengths and weaknesses of the business by assessing the previous records of its activities. An example of the use of the scorecard is when a company is considering implementing a mobile technology to improve its inventory system using Radio Frequency Identification (RFID) technology. The company could evaluate the expected benefits by using the process cycle of the Financial Scorecard.

Figure 1. Mobile business application framework

• In Part 1 of Figure 1, the process cycle of the Financial Scorecard starts with the Financial Perspective. Management assesses the Financial Perspective, looking at previous records or accounts of the business activities, to set objectives, measures and operational goals. At the end of the business process, management can review the record and comment on the Management Initiatives. This feedback checks whether the business has reached its intended goals or requires additional procedures to enhance the business. These Scorecard procedures are repeated for the Customer, Learning and Growth, and Internal Business Process perspectives.
• The Internal Business Process uses the mobile RFID technology to enhance stock location processes in the warehouse. The Management Initiatives column indicated in Part 2 of Figure 1 will be used to record the success factor at the end of the implementation.
• As the cycle repeats, the Management Initiatives for the Financial Perspective are assessed and recorded, to monitor the effects of implementing a mobile technology in the business.
• This iterative process is repeated for the Customer, and the Learning and Growth, perspectives.
• After completing the cycle (anti-clockwise in Part 1 of Figure 1), the Internal Business Process for implementing the mobile technology is assessed against the target goals, as shown in Part 3.
In Figure 1, Section B illustrates the strategic framework and identifies the type of mobile device that would be suitable for the business. The framework indicated in Part 1 illustrates a radial wave, or ripple, which depicts the mobility range for mobile applications involving the Internet, Extranet, and Intranet (the Mobile Application Region). These three regions are shown on the horizontal axis of the framework. The implementation of the mobile RFID technology is contained within the warehouse; consequently, this is an Intranet region. In each region, there are also circular bands of mobile usage. These bands are colour-coded to signify 'critical data transfer', which indicates the importance of data transmission to the business applications. The three different regions differentiate the types of mobile device and determine how the software application can be implemented in relation to business operations. The type of mobile device, or application, used in the business is shown on the vertical axis of the diagram. The Intranet and Extranet regions of the framework, illustrated in Figure 1, can apply a fully integrated mobile device or a customised application using normal mobile phones, such as Wal-Mart's wireless inventory checking (Sliwa, 2004). Users within the Internet region can also use normal mobile devices, but they will link to a web-enabled e-commerce database to connect to an application, for example an SMS vending machine (SAFECOM, 2007).
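To make the framework's mapping concrete, the region-to-device logic described above can be written down as a simple lookup. The sketch below is a hypothetical encoding for illustration only; it is not part of the published framework, and the wording of the device classes paraphrases the description above.

```python
# Hypothetical encoding of Section B of the framework: each mobility
# region admits certain device/application classes (illustrative only).
REGION_OPTIONS = {
    "intranet": ["fully integrated mobile device",
                 "customised application on normal mobile phones"],
    "extranet": ["fully integrated mobile device",
                 "customised application on normal mobile phones"],
    "internet": ["normal mobile device linking to a web-enabled "
                 "e-commerce database (e.g., an SMS vending machine)"],
}

def device_options(region):
    """Return the device/application classes suggested for a mobility region."""
    return REGION_OPTIONS[region.lower()]

# The warehouse RFID system is contained within the business, i.e., Intranet:
print(device_options("Intranet"))
```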
MOBILE BUSINESS APPLICATION FRAMEWORK ON WAL-MART

Wal-Mart stores provide a range of goods, from groceries and clothing to electronics and frozen goods, all at affordable prices (Sliwa, 2004). The current IT infrastructure of Wal-Mart is based on the provision of an efficient 'upstream' supply chain system. Its main intent is to support its strategy by providing the most efficient stock control, warehousing and distribution systems in the business (Cyber-Lab, 2005). The application of RFID technology has significantly improved its upstream supply chain. Wal-Mart is able to keep costs to a minimum by keeping track of its Point-of-Sale (POS) systems and the delivery times of its suppliers. Radio Frequency Identification (RFID) tags, or identity tags, are small devices that look like label tags. They function by emitting small radio signals, which can be read by an RFID reader that can display a considerable amount of the data tagged on them. Such devices are commonly used for locating and identifying stock
crates and inventories in warehouses. Wal-Mart is well known for applying such technology in its business activities, particularly since it receives daily crates of perishable goods. Active and passive tags are the two main types of RFID technology, and they function differently within warehouse supply chain management. Active RFID tags have a built-in battery which powers them continuously to emit constant radio frequency (RF) signals for the reader to detect, while passive RFID tags rely on radio frequency energy from the reader to power the tag temporarily and then respond to the reader (Savi, 2007; Sensitech, 2007). In a warehouse where crates are not moved constantly, the use of passive RFID will suffice; applying active RFID in a low-activity warehouse will underutilise the RFID technology. Active tags are therefore best suited where there is high activity of moving crates and materials. Such a process is very dynamic and unconstrained, so the ability to detect multiple active RFID tags at once is the best option for this situation (Savi, 2007).
Figure 2. Integrated mobile/device and application
RELATION WITH MOBILE BUSINESS FRAMEWORK

Comparing the proposed framework against the current inventory system in Wal-Mart, the implementation corresponds to the framework and fits into the Intranet region. This indicates that the use of the mobile devices in the warehouse is to track and identify the crates that arrive from deliveries and to update the inventory stock accordingly. This business strategy analyses the business operations, identifies the weaknesses and indicates areas for improving business activities. Figure 1, Section C outlines the improvements needed to commence in the new procurement year. In the Internal Business Process Perspective, Figure 1, Section C highlights the need for the use of mobile application systems. In the next stage, the framework illustrates to business managers which mobile devices can be applied in this situation. Figure 2 illustrates the types of mobile devices that can be used in the Integrated Mobile Device and Application for this region of the business area. The mobile devices will include RFID tags, mobile RFID readers and a database system that tracks the tagged items.
• The use of these RFID tags is to identify incoming and outgoing crates of the Wal-Mart warehouses.
• The overall goal of the implementation is to improve the business activity by reducing the costs and time taken to locate specific suppliers and inventories.
The "Integrated Mobile/Device and Application" stage of the framework, shown in Figure 2, determines the type of critical data transfer. The data transferred between the centralised database and the mobile devices will include the type of stock, date, content and location of the crates. The following additional features are included in the system (a code sketch of the tracking update follows the list):

• Contingency arrangements include a barcode or serial number which the user can key in manually in the event that the RFID tag is damaged.
• In this instance, the original data transfer is likely to be of the 'High' classification, because a delayed delivery may result in a severe chain reaction (i.e., compound shipment delays) due to the use of a just-in-time system.
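The sketch below is a minimal illustration of the crate-tracking update described above, assuming an in-memory stand-in for the centralised inventory database; the manual barcode fallback implements the contingency arrangement. All names and fields are illustrative and not taken from the actual Wal-Mart system.

```python
import datetime

# Illustrative in-memory stand-in for the centralised inventory database.
inventory = {}  # key (tag ID or barcode) -> crate record

def record_scan(tag_id=None, barcode=None, *, stock_type, content, location):
    """Record an incoming/outgoing crate scan.

    Contingency: if the RFID tag is damaged (tag_id is None), the user
    keys in the crate's barcode/serial number manually instead.
    """
    key = tag_id if tag_id is not None else barcode
    if key is None:
        raise ValueError("need an RFID tag ID or a manually keyed barcode")
    inventory[key] = {
        "stock_type": stock_type,
        "content": content,
        "location": location,
        "date": datetime.datetime.now(),  # delivery timing for JIT tracking
        "via": "rfid" if tag_id else "manual barcode",
    }

# Normal RFID read:
record_scan(tag_id="TAG-0042", stock_type="perishable",
            content="frozen goods", location="dock 3")
# Damaged tag, manual fallback:
record_scan(barcode="SN-98761", stock_type="grocery",
            content="canned goods", location="aisle 12")
```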
CONCLUSION

The strategic framework tool presented here is designed to help managers identify where mobile technology can enhance their strategic business operations. The business strategy should align with the IT strategy, organisational infrastructure and information system infrastructure in order to provide competitive advantage. Wi-Fi and Bluetooth technologies are popular in businesses because of their effective wireless range. The short wireless range allows a more secure system, as the data transmission signal may be contained within the business area. The Japanese restaurant chain Wagamama (Terry, 2006) has implemented Bluetooth technology within its business for order taking. WiMax can be used to overcome geographical boundaries or avoid the problem of installing cables. The application of the proposed mobile strategic framework to several commercial mobile applications has been outlined (Pg Haji Ali, 2007; Pg Haji Ali and Atkins, 2007). It demonstrates the use of a Financial Scorecard to evaluate how the business cost-effectiveness and departmental goals within the company can be assessed, allowing managers to identify and align strategic initiatives to the business (Lorsch, 1993). In the case of mobile technology, the strategic framework assesses the suitability of mobile devices for the business and considers the type of security and applications appropriate for a particular mobile device. The mobile strategic framework is designed to assist managers to make informed decisions and increase competitive advantage in mobile infrastructure enhancements.
REFERENCES

Alexander, L. (2007). Are Wireless Networks Safe? Retrieved 15 February 2007: http://blogs.cio.com/node/656

Atkins, A. S., Pengiran Haji Ali, A. K. Hairul Nizam, & Shah, H. (2006, October). Extending e-business applications using mobile technology. In Proceedings of the International Conference on Mobile Technology, Applications and Systems, IEE Mobility Conference, Bangkok, Thailand. Retrieved from http://doi.acm.org/10.1145/1292331.1292381

Brian, K. (2006). Lowering Health Care Costs Out-of-the-Box. Retrieved 27 September 2006: http://wireless.sys-con.com/read/40975.htm

British Red Cross. (2004). First aid at your fingertips. Retrieved 6 July 2007: http://www.redcross.org.uk/news.asp?id=41091
Chen, L., & Nath, R. (2004). A framework for mobile business applications. International Journal of Mobile Communications, 2(4), 368–381. doi:10.1504/IJMC.2004.005857

Cyber-Lab. (2005). A Supermarket uses RFID to Control Stock Inventory. Retrieved 12 April 2006: http://www2.cpttm.org.mo/cyberlab/rfid/wal-mart.html.en

E-Health Insider. (2005). Addenbrooke's installs SMS patient reminder system. Retrieved 1 March 2006: http://www.e-health-insider.com/news/item.cfm?ID=1236

Farhoomand, A., & Ng, S. P. (2003). Creating sustainable competitive advantage through internetworked communities. Communications of the ACM, 46(9), 83–88. doi:10.1145/903893.903898

Ghalayini, A. M., & Noble, J. S. (1996). The changing basis of performance measurement. International Journal of Operations & Production Management, 16(8), 63–80. doi:10.1108/01443579610125787

Greengard, S. (2001). Customer services goes wireless. Business Finance, 35.

Hadfield, W. (2006). Police get Streetwyse to mobile ID checks. Computer Weekly, 4, 6.

Haines, L. (2005). Digital divide is self-repairing, says UK gov. Retrieved 5 February 2007: http://www.theregister.co.uk/2005/09/06/digital-divide/

Harrington (2006). Standards and Guidelines, WHO. Retrieved 15 February 2007: http://www.who.int/peh-emf/standards/en/

Henderson, J. C., & Venkatraman, N. (1993). Strategic alignment: Leveraging information technology for transforming organisations. IBM Systems Journal, 32(1), 4–16.
Jarvenpaa, S. L. (2006). Internet goes mobile: How will wireless computing affect your firm's Internet strategy? Retrieved May 2006: http://web.hhs.se/cic/about/blkcoffee/Internet_Mobile_6-19_sj.pdf

Lorsch, J. W. (1993). Smelling smoke: Why boards of directors need the Balanced Scorecard. Balanced Scorecard Report, September–October, p. 9.

McFarlan, F. W., McKenney, J. L., & Pyburn, P. (1983). The information archipelago: Plotting a course. Harvard Business Review, 61, 145–155.

Mobile Youth. (2005). Retrieved 3 April 2006: http://www.mobileyouth.org/my_item/ringtone_sales_build_600_mn_2005_revenues

Pg Hj Ali, A. H. N. (2007). Mobile Business Application. Masters by Research thesis, Faculty of Computing, Engineering and Technology, Staffordshire University, United Kingdom (unpublished).

Pg Hj Ali, A. H. N., & Atkins, A. S. (2007, June). A strategic business tool for mobile infrastructure. Wireless Ubiquitous Computing, International Conference on Enterprise Information Systems, Funchal, Portugal, pp. 23–32.

Porter, M. E. (2001, March). Strategy and the Internet. Harvard Business Review, 63–78.

Roto, V. (2007). Browsing on mobile phones. Retrieved 6 June 2007: http://www.research.att.com/~rjana/WF12_Paper1.pdf

Savi Technologies. (2007). Savi's RFID solutions. Retrieved 6 June 2007: http://www.savi.com/index.shtml

Savvas, A. (2007, February). Mobility is increasingly key to strategy. Computer Weekly, 13, 12.
Serbedzija, N., Fabiunke, M., Schön, F., & Beyer, G. (2005). Multi-client car insurance. In Proceedings of the IADIS International Conference WWW/Internet (ICWI) 2005, 2, 30–34, 19–22 October, Lisbon, Portugal. ISBN: 972-8924-02-X

Shaffer, D., & Srinivasan, M. (2005). Agile IT through SOA requires new technologies: How not to fall into the trap of applying archaic integration technologies! Retrieved 6 June 2007: http://www.alignjournal.com/index.cfm?section=article&aid=310

Sliwa, C. (2004). Wal-Mart CEO promises 'tough love' approach to RFID use. Retrieved 5 April 2006: http://www.computerworld.com/mobiletopics/mobile/technology/story/0,10801,89011,00.html

Terry, L. (2006). Wireless business and technology: Serving up wireless. Retrieved 10 January 2006: http://wbt.sys-con.com/read/40916.htm

UPS. (2005). Embracing technology. Retrieved 19 September 2005: http://www.ups.com/content/corp/about/history/1999.html

Varshney, U., & Vetter, R. (2002). Mobile commerce: Framework, applications and networking support. Mobile Networks and Applications, 7, 185–198. doi:10.1023/A:1014570512129
KEY TERMS AND DEFINITIONS

Bluetooth Technology: Short-range wireless radio technology for connecting small devices, such as wireless PDAs and laptops, to each other and connecting them in a wireless network.

E-Business: Electronic business; any business process that relies on an automated information system.

M-Payment: Mobile payment; the process of paying for goods or services with a mobile device such as a mobile phone, Personal Digital Assistant (PDA), or other wireless device.

Mobile Business Application: Commercial usage of mobile electronic transactions in business operations.

Mobile Technology: The use of mobile telephony, mobile computing, and miscellaneous portable electronic devices, systems, and networks.

Strategic Framework: A framework to provide a focused solution for assisting in business decision making.

Wi-Fi: Wireless Fidelity, commonly used to provide a wireless interface for mobile computing devices, such as laptops, in a Local Area Network (LAN).
This work was previously published in Handbook of Research in Mobile Business, Second Edition: Technical, Methodological and Social Perspectives, edited by Bhuvan Unhelkar, pp. 104-112, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 4.19
Application of Software Metrics in ERP Projects

S. Parthasarathy
Thiagarajar College of Engineering, India
DOI: 10.4018/978-1-61520-625-4.ch004

ABSTRACT

Business information systems are an area of the greatest significance in any business enterprise today. Enterprise Resource Planning (ERP) projects are a growing segment of this vital area. Software engineering metrics are units of measurement used to characterize software engineering products and processes. Research about the software process has acquired great importance in the last few years due to the growing interest of software companies in the improvement of their quality. Enterprise Resource Planning (ERP) projects are very complex products, and this fact is directly linked to their development and maintenance. One of the major reasons found in the literature for the failure of ERP projects is the poor management of software processes. In this chapter, the authors propose a Software Metrics Plan (SMP) containing different software metrics to manage software processes during ERP implementation. Two hypotheses have been formulated and tested using statistical techniques to validate the SMP. The statistical analysis of the data collected from an ERP project supports the two hypotheses, leading to the conclusion that software metrics are momentous in ERP projects.
INTRODUCTION

Software process improvement often receives little orderly attention. If it is important enough to do, however, someone must be assigned the responsibility and given the resources to make it happen. Until this is done, it will remain a nice thing to do someday, but never today. The software engineering process is the total set of software engineering activities needed to transform a user's requirements into software (Humphrey 2005). In other words, a software process is a set of software engineering activities necessary to develop and maintain software products. The reason for defining the software process is to improve the way the work is done. By thinking about the process in an orderly way, it is possible to anticipate problems and to devise ways to either prevent or resolve them. The software processes that are of great concern during ERP implementation are requirements instability, scheduling and software maintenance (Parthasarathy and Anbazhagan, 2006). Here, we use software metrics to manage and improve these software processes during ERP implementation. This chapter explains specifically how the software processes can be quantified, plotted, and analyzed so that the performance of ERP software development activities can be predicted, controlled, and guided to achieve both business and technical goals. As mentioned by Parthasarathy & Anbazhagan (2006), though there are a handful of software processes for ERP projects, processes such as requirements stability, schedule slippage and the monitoring of software maintenance tasks are considered more important and account for the performance enhancement of ERP projects. Hence, only these software processes are dealt with using software metrics in the proposed Software Metrics Plan (SMP) developed in this chapter. Many project managers acknowledge that measurement helps them understand and guide the development of their projects (Fenton et al 2003). They want to select particular metrics, perhaps as part of an overall development plan, but they do not know how to begin. The answers to a manager's metrics questions should appear in the project's metrics plan, so that managers and developers know what to collect, when to collect it and how the data relate to management decisions. The plan enables managers to establish a flexible and comprehensive metrics program as part of a larger process or product improvement program. Research about the software process has acquired great importance in the last few years due to the growing interest of software companies in the improvement of their quality. ERP projects are very complex products, and this fact is directly linked to their development and maintenance. The total quality management (TQM) notion of prevention rather than correction can be applied successfully in software engineering. Project schedule slippage and tracking problems during
maintenance are not uncommon. A key issue in ERP implementation is how to find a match between the ERP system and an organization's business processes by appropriately customizing both the system and the organization (Arthur 1997). The "Gap Analysis" phase in ERP implementation is a step of negotiation between the company's requirements and the functions an ERP package possesses (Carmel et al 1998). Poor requirements specification and requirements instability badly affect the gap analysis phase of an ERP project, which in turn leads to schedule slippage and clusters of problems during the maintenance phase of the software project. Requirements instability is probably the most important single software process issue for many organizations, and the failure of many software projects can be directly linked to it (Davenport 1998). Customization, the biggest technology headache, is considered a critical success factor for ERP implementation (Parthasarathy and Anbazhagan, 2007). Even for those companies that have successfully implemented large-scale information systems projects in the past, ERP implementation still presents a challenge, because it is not simply a large-scale software deployment exercise. Also, as ERP implementation is often accompanied by large-scale organizational changes, agile software processes have not created much impact on ERP projects (Glass 1998). One of the major reasons found in the literature for the failure of ERP projects is poor software process management. Hence, a Software Metrics Plan (SMP) has been developed to deal with the software processes discussed in the literature review and found to be most important for successful ERP implementation. The software processes dealt with in the SMP for effective software process management during ERP implementation are:
i. Requirements Stability Index (RSI)
ii. Schedule Slippage (SS)
iii. Arrival Rate of Problems (ARP)
iv. Closure Rate of Problems (CRP)
v. Age of Open Problems (AOP)
vi. Age of Closed Problems (ACP)
The SMP contains a set of software metrics to manage these software processes during ERP implementation. Two hypotheses have been formulated and tested using statistical techniques to validate the SMP. The statistical analysis of the data collected from an ERP project supports these two hypotheses, leading to the conclusion that software metrics play a significant role in ERP projects. The results give the ERP team useful information for identifying the influence of one process on another.
SOFTWARE METRICS PLAN (SMP)

A metrics plan is much like a newspaper article: it must describe the who, what, where, when, how and why of the metrics. With answers to all of these questions, whoever reads the plan knows exactly why metrics are being collected, how they are used, and how metrics fit into the larger picture of software development or maintenance. The plan usually begins with the "why". It is in this introductory section that the plan lays down the goals or objectives of the project, describing what questions need to be answered by project members and software process management. For example, if reliability is a major concern to the developers, then the plan discusses how reliability will be defined and what reporting requirements are imposed on the project; later sections of the plan can then discuss how reliability will be measured and tracked.

Next, the plan addresses "what" will be measured. In many cases, the measures are grouped or related in some way. For instance, productivity may be measured in terms of two component
pieces, size and effort. So the plan will explain how size and effort are defined and then how they are combined to compute productivity. At the same time, the plan must spell out "where" and "when" during the process the measurements will be made. Some measurements are taken once, while others are made repeatedly and tracked over time. The time and frequency of collection are related to the goals and needs of the project, and those relationships should be made explicit in the plan.

"How" and "who" address the identification of tools, techniques and staff available for metrics collection and analysis. It is important that the plan discusses not only what measures are needed but also what tools are to be used for data capture and storage, and who is responsible for those activities. Often, metrics plans ignore these crucial issues, and project members assume that someone else is taking care of the metrics business; the result is that everyone agrees that the metrics are important, but no one is actually assigned to collect the data. Likewise, responsibility for analysis often slips through the project cracks, and data pile up but are never used in decision making. The plan must state clearly what types of analysis will be performed on the data, who will do the analysis, how the results will be conveyed to the decision makers, and how they will support decisions. Thus, the metrics plan paints a comprehensive picture of the measurement process, from the initial definition of need to the analysis and application of the results.

Software processes have an important influence on the quality of the final software product, and for this reason companies are becoming more and more concerned with software process improvement. Successful management of the software process is necessary in order to satisfy the final quality, cost and time-to-market of the software products (Kitchenham et al 2002). To improve software processes (Pressman 2006), a great variety of initiatives have arisen, like the Capability Maturity Model (CMM). All these initiatives focus on software processes for developing, implementing or improving a quality management system.
Table 1. Metric-01 in the software metrics plan (SMP)
Metric: Schedule Slippage
Formula: {(Actual number of days – Estimated number of days) / Estimated number of days} * 100
Measure (Indirect Metric): The actual number of days is the difference between the date when a particular activity/phase or project was completed and the date when it started. The estimated number of days is the difference between the planned start date and the planned end date of the activity/phase or project, based on the latest plan.
Source of Data: Actual end date, expected start date and expected end date, all taken from the project schedule.
Collection Responsibility: Project Manager
Frequency of Analysis: Weekly progress review, phase end (milestone), event driven. Analysis can be done using the Schedule Tracking Sheet.
Significance to ERP Project: Used to control the project activities and ensure on-time delivery. Can be used for better estimation of future ERP projects, leading to effective software process management.
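As a concrete illustration of Metric-01, the following Python sketch computes schedule slippage from schedule dates. The function name, date arguments and example dates are illustrative assumptions; the SMP prescribes only the formula above and the project schedule as the data source.

```python
from datetime import date

def schedule_slippage(actual_start: date, actual_end: date,
                      planned_start: date, planned_end: date) -> float:
    """Metric-01: percentage slippage of an activity, phase or project.

    Follows the SMP formula: ((actual days - estimated days) / estimated days) * 100,
    where both durations are taken from the latest project schedule.
    """
    actual_days = (actual_end - actual_start).days
    estimated_days = (planned_end - planned_start).days
    return (actual_days - estimated_days) / estimated_days * 100

# Example: a phase planned for 40 days that actually took 46 days slipped by 15%.
print(schedule_slippage(date(2009, 1, 5), date(2009, 2, 20),
                        date(2009, 1, 5), date(2009, 2, 14)))  # -> 15.0
```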
An ERP project involves software processes that will exist even after the implementation of the ERP system; in fact, the actual processes start only when the system goes live during implementation. Traditional software projects have requirements collected from the customers for whom the software is developed. In ERP projects, by contrast, the software is made available to the customers ready-made, after fine-tuning its functionality to match the exact requirements of the customers. The purpose of the SMP is to enhance the performance of ERP projects by im-
proving their software processes. The SMP has a set of well-defined metrics that deal with measuring requirements stability and project schedule slippage, and with tracking the problems arising during the maintenance phase. The SMP can be further extended to other aspects like effort distribution, productivity, etc., if desired. The SMP contains the following:

1. Source of data and how it will be captured
2. Periodicity of data capture
3. Formula to measure the software process
4. Team member responsible for data collection
5. Nature of measurement
6. Significance of the proposed metric to the ERP project
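The six items above map naturally onto a simple structured record. The Python dataclass below is a hypothetical sketch of how one SMP entry might be stored in a metrics repository, populated with Metric-01 as an example; the field names are our own, not part of the published SMP.

```python
from dataclasses import dataclass

@dataclass
class SMPMetric:
    """One entry of the Software Metrics Plan (fields follow items 1-6 above)."""
    name: str          # metric name, e.g. "Schedule Slippage"
    data_source: str   # 1. source of data and how it is captured
    periodicity: str   # 2. periodicity of data capture
    formula: str       # 3. formula to measure the software process
    owner: str         # 4. team member responsible for data collection
    nature: str        # 5. nature of measurement (direct/indirect)
    significance: str  # 6. significance to the ERP project

metric_01 = SMPMetric(
    name="Schedule Slippage",
    data_source="Project schedule (actual and planned dates)",
    periodicity="Weekly progress review, phase end, event driven",
    formula="((actual days - estimated days) / estimated days) * 100",
    owner="Project Manager",
    nature="Indirect",
    significance="Controls project activities and ensures on-time delivery",
)
```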
Table 2. Metric-02 in the software metrics plan (SMP)
Metric: Requirements Stability Index (RSI)
Formula: Number of requirements changed (added/deleted/modified) / Total number of initial requirements
Measure (Indirect Metric): The number of requirements changed is the sum of the requirements that were added, modified or deleted from the initial set of requirements approved by the client. The initial requirements are the number of requirements initially approved by the client.
Source of Data: Initial requirements: the initial set of approved requirements from the client. The number of changes in the requirements can be found from the Client Request Form (CRF) and the Client Complaint Form (CCF).
Collection Responsibility: Project Manager
Frequency of Analysis: At the end of every phase of ERP implementation, using the RSI tracking sheet
Significance to ERP Project: RSI is used to monitor the magnitude of change in the requirements; it gives a picture of the clarity of the requirements in the customer's mind. RSI is useful for effective software process management.
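A minimal Python sketch of Metric-02 follows. The counts in the example are invented for illustration: 12 changes against 80 initially approved requirements yields RSI = 0.15, which happens to equal the value later reported for the gap analysis phase in Table 7.

```python
def requirements_stability_index(added: int, deleted: int, modified: int,
                                 initial_requirements: int) -> float:
    """Metric-02: Requirements Stability Index (RSI).

    SMP formula: requirements changed (added + deleted + modified)
    divided by the total number of initially approved requirements.
    A lower value indicates more stable requirements.
    """
    changed = added + deleted + modified
    return changed / initial_requirements

# Example with illustrative counts.
print(requirements_stability_index(added=5, deleted=3, modified=4,
                                   initial_requirements=80))  # -> 0.15
```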
Table 3. Metric-03 in the software metrics plan (SMP)
Metric: Arrival Rate of Problems (ARP)
Formula: Number of problems arriving (reported by the client) every month
Measure (Direct Metric): Number of problems reported by the client in a particular month
Source of Data: Modification Request Form
Collection Responsibility: Project Manager
Frequency of Analysis: The problem arrival rate, when compared with the problem closure rate, gives an indication of the amount of work still pending and helps to anticipate the work for the next month
Significance to ERP Project: Used for effective software maintenance
The following tables (Table 1 to Table 6) collectively form the Software Metrics Plan (SMP). In this SMP, six software metrics have been defined, each dealing with a specific software process and leading to successful software process management. The goal of this study is to establish the significance of software metrics in ERP projects. We use correlation and multiple correlation (Levine et al 1999) to test the two hypotheses formulated below and to investigate the impact of one metric on another. A relationship between two variables such that a change in one variable results in a positive or negative change in the other, and a greater change in one variable results in a correspondingly greater change in the other, is known as correlation. Positive or negative values of the coefficient of correlation R between two variables indicate positive or negative correlation. We have used Karl Pearson's coefficient of correlation, which is defined below. Note that R has no units and is a pure number. If there is some relationship between two variables, their scatter diagram will have points clustering about some curve.
R = \frac{\frac{1}{N}\sum_{i} X_i Y_i \;-\; \bar{X}\,\bar{Y}}{\left[\frac{1}{N}\sum_{i} X_i^{2} - \bar{X}^{2}\right]^{1/2}\left[\frac{1}{N}\sum_{i} Y_i^{2} - \bar{Y}^{2}\right]^{1/2}} \qquad (1)
Equation (1) is used to compute the correlation coefficient R. Multiple correlation is used to find the degree of inter-relationship among three or more variables. The objective of using multiple correlation is to find how far the dependent variable is influenced by the independent variables.
Table 4. Metric-04 in the software metrics plan (SMP)
Metric: Closure Rate of Problems (CRP)
Formula: Number of problems closed each month, based on severity
Measure (Direct Metric): Number of problems closed in a particular month
Source of Data: Modification Request Form
Collection Responsibility: Project Manager
Frequency of Analysis: The problem closure rate, when compared with the problem arrival rate, gives an indication of the amount of work still pending and helps to anticipate the amount of work which can be completed next month
Significance to ERP Project: Used for effective software maintenance
Table 5. Metric-05 in the software metrics plan (SMP)
Metric: Age of Open Problems (AOP)
Formula: Sum of time (days per month) that problems have been open / Number of open problems per month
Measure (Direct Metric): Number of problems open in a particular month, of each severity
Source of Data: Modification Request Form
Collection Responsibility: Project Manager
Frequency of Analysis: The age of open problems gives the time for which a problem of a particular severity remains open and helps in setting realistic schedule estimates.
Significance to ERP Project: Used for effective software maintenance
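Metric-03 through Metric-05 (and Metric-06 by analogy) can be derived from a log of problem reports. The sketch below assumes a hypothetical record layout of (reported date, closed date or None), since the chapter names only the Modification Request Form as the data source; the AOP computation is a simplified month-end approximation of the tabulated formula, ignoring severity.

```python
from datetime import date

# Hypothetical extract from a Modification Request Form:
# (date reported by the client, date closed or None if still open)
problems = [
    (date(2009, 3, 2),  date(2009, 3, 20)),
    (date(2009, 3, 10), None),
    (date(2009, 3, 25), date(2009, 4, 4)),
]

def arrival_rate(problems, year, month):
    """Metric-03 (ARP): problems reported by the client in a given month."""
    return sum(1 for reported, _ in problems
               if reported.year == year and reported.month == month)

def closure_rate(problems, year, month):
    """Metric-04 (CRP): problems closed in a given month."""
    return sum(1 for _, closed in problems
               if closed and closed.year == year and closed.month == month)

def age_of_open_problems(problems, as_of):
    """Metric-05 (AOP): total open days divided by the number of open problems."""
    open_ages = [(as_of - reported).days for reported, closed in problems
                 if closed is None or closed > as_of]
    return sum(open_ages) / len(open_ages) if open_ages else 0.0

print(arrival_rate(problems, 2009, 3))                    # -> 3 arrived in March
print(closure_rate(problems, 2009, 3))                    # -> 1 closed in March
print(age_of_open_problems(problems, date(2009, 3, 31)))  # -> 13.5 days on average
```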
We denote the multiple correlation between x1, the dependent variable, and the independent variables x2, x3, …, xn by R1.234…n. The multiple correlation coefficient of x1 on x2 and x3 is the simple correlation coefficient between the observed value of x1 and its estimated value b12.3x2 + b13.2x3, denoted by e1.23. We denote the multiple correlation of x1 on x2 and x3 by R1(23). Equation (2) gives the value of R1(23).

R_{1(23)} = \frac{\operatorname{Cov}(x_1, e_{1.23})}{\sqrt{\operatorname{Var}(x_1)}\,\sqrt{\operatorname{Var}(e_{1.23})}} \qquad (2)

The following two hypotheses are formulated to focus our study on the usage of the software metrics defined in the SMP for effective software process management during ERP implementation and to facilitate statistical analysis.

Hypothesis 1: The metric RSI influences the metric Schedule Slippage (SS).

Hypothesis 2: The metrics AOP and ACP determine the maintenance effort (ME), i.e., person-hours.

Table 6. Metric-06 in the software metrics plan (SMP)
Metric: Age of Closed Problems (ACP)
Formula: Sum of time (days per month) that problems have been closed / Number of closed problems per month
Measure (Direct Metric): Number of problems closed in a particular month, of each severity
Source of Data: Modification Request Form
Collection Responsibility: Project Manager
Frequency of Analysis: The age of closed problems gives the time taken to close a problem of a particular severity and helps in setting realistic schedule estimates.
Significance to ERP Project: Used for effective software maintenance

In traditional software projects, from the viewpoint of software engineering, there are five phases in the software development life cycle (SDLC): Requirements Analysis, Design, Coding, Testing and Implementation. ERP is packaged software, and its implementation in an enterprise involves six phases called the ERP implementation lifecycle (Alexis Leon 2005): Requirements Analysis, Gap Analysis, Reengineering, Configuration Management, Testing and Maintenance. The RSI and SS values for these phases of ERP Project A are shown in Table 7. The requirements of the customers are collected and analyzed during the requirements analysis phase. The gap analysis phase is the step of negotiation between the customer's requirements and the functions the ERP package possesses (Alexis Leon 2005).
Table 7. RSI and SS values for ERP Project A

Phase in ERP implementation    RSI      SS
Requirements Analysis          0.27     0.11
Gap Analysis                   0.15     0.059
Reengineering                  0.076    0.072
Configuration Management       0.63     0.29
Testing                        0.59     0.36
Maintenance                    0.7      0.51
tomer’s requirements and the functions the ERP package possesses (Alexis Leon 2005). Reengineering is defined as the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service and speed (Alexis Leon 2005). Table 7 provides the RSI and SS values computed based on the data from ERP Project A. This Project A was done by a medium scale software company involved in the development of the ERP software for a manufacturing industry at a lower cost with standard configurations. It is not a large scale information system project like the one being done by the leading ERP vendors SAP, Oracle, etc., But from the research perspective, these data sets are found to be useful to validate the hypotheses. The coefficient of correlation ‘R’ between RSI and SS is calculated as 0.9406. The value of R, being greater than zero, indicates a positive correlation between the two variables RSI and SS. This indicates that the requirements stability index has an impact over the schedule slippage of the phases during the ERP implementation. Hence, a Metric-02 (shown in Table 2) has been defined to monitor the RSI. Data obtained from the same project A were used to compute the simple correlations between the metrics ME (x1), AOP (x2) and AOC (x3) as r12 = 0.863, r13 = 0.648 and r23 = 0.709 respectively. The multiple correlation R1 (23) computed using the Equation (1) gives 0.864. Since R1 (23) is very large, it follows that the variables x2 and x3 have considerable influence on the variable x1. In other words, regression equation of x1 on x2 and x3 will be excellent.
This data analysis gives a great deal of support to the two hypotheses formulated in this study. The clearest results observed from the values of R and R1(23) are: (i) the strong relationship between RSI and SS; and (ii) the influence of the Age of Open Problems (AOP) and the Age of Closed Problems (ACP) on the maintenance effort (ME). The most reasonable inference that can be drawn from this study is that managing these software processes considerably boosts the performance of ERP projects, because it helps the ERP team to effectively monitor the important software processes during ERP implementation.
DISCUSSION

Measurement enables us to gain insight into the process and the project by providing a mechanism for objective evaluation. Measurement can be applied to the software process with the intent of improving it on a continuous basis, and can be used throughout a software project to assist in estimation, quality control, productivity assessment and project control. Measurement is a management tool; if conducted properly, it provides a project manager with insight and, as a result, assists the project manager and the software team in making decisions that will lead to a successful project. The intent of the SMP is to provide a set of process indicators that lead to long-term software process improvement. The only rational way to improve any process is to measure specific attributes of the process, develop a set of meaningful metrics based on these attributes, and then use the metrics to provide indicators that will
lead to a strategy for improvement. The SMP enables the software engineering team to take a strategic view by providing insight into the effectiveness of a software process. Many organizations jump into the implementation without defining the project in "bite-sized chunks" that can be accomplished in a reasonable period of time. As schedules drag on and requirements are heaped on the initial phase, the customer loses faith in the initiative and organizational inertia can take hold. If requirements are managed scrupulously and reflected in the form of clearly articulated scope elements, the entire project is more likely to succeed, and chances are better for its ultimate adoption and survival.

Two kinds of observations are noted in this research study on using software metrics for managing the software processes of ERP projects. One observation is based on the statistical analysis, and the other relates to the usefulness of software metrics in ERP projects. The statistical analysis shows that the SMP proposed in this study plays a significant role in successful ERP implementation. There is sufficient evidence from the literature review that ERP projects are severely affected by poor software process management, especially the varying requirements due to organizational changes and customization. These varying requirements lead to schedule slippage. It is apparent that large maintenance efforts are required for ERP projects, as the real software processes come to life only after ERP implementation. It is evident that the software metrics proposed in the SMP are a pragmatic, useful method for improving the software processes in ERP projects, because they enable the ERP team to resolve important issues such as requirements instability, schedule slippage and the effort required for maintenance. Furthermore, the SMP informs the decisions of the ERP team and guards against the various defects that might arise during or after the implementation. In short, the SMP helps the ERP team to keep track of their project and pilot it
on the right path as and when there is a deviation. The results obtained here underscore the importance of effective software process management during ERP implementation. A number of software processes are involved in software development and its maintenance. The six processes that require monitoring and effective management are considered in this study, and an SMP has been developed consisting of a set of software metrics to quantitatively measure the software processes. A software process is the collection of processes for the various tasks involved in executing projects for building software systems. As a result of changes in technology, knowledge and people's skills, the processes for performing different tasks change. In other words, processes evolve with time. With knowledge and experience, processes can, and should, be "fine tuned" to give better performance. Software process management is concerned with this tuning of the software process. There are many published reports showing the benefits that software process management can bring to the quality, productivity and cycle time of ERP projects. Currently, most organizations that embark upon a software process management program tend to use a framework like the quality models for their process improvement. From the application of the SMP to ERP Project A, it can be seen that it is a straightforward approach to manage the crucial software processes and help the ERP implementation team to improve them. A future research study could compare the performance of ERP projects using this SMP with those not using the SMP, and could apply other statistical techniques like multiple linear regression. If the software metrics are linked with some software quality factors, then the performance of the project can be further improved (Aishayeb 2003). Hence, generating suitable metrics to consider the quality aspects of software processes will strengthen the developed software product. It is also proposed to develop a software tool to execute the SMP. In the
future, we plan to generate a project database to make the results of SMP from each ERP project publicly available for the ERP team.
Fenton, N. E., & Pfleeger, S. L. (2003). Software metrics: A rigorous and practical approach. New York: Thomson.
REFERENCES
Glass, R. L. (1998). Enterprise Resource Planning—Breakthrough and/or term problems. The Data Base for Advances in Information Systems, 29(2), 14–16.
Aishayeb, M., & Li, W. (2003). An empirical validation of object-oriented metrics in two different iterative processes. IEEE Transactions on Software Engineering, 29(11).
Holland, C. P., & Light, B. (1999). A critical success factors model for ERP implementation. IEEE Software, 16(3), 30–36. doi:10.1109/52.765784
Alexis, L. (2005). Enterprise resource planning. New Delhi, India: Tata McGraw-Hill.
Humphrey, W. (2005). Managing the software process. Reading, MA: Addison-Wesley.
Ambler, S. W. (2002). Agile Modeling: Effective Practices for Extreme Programming and the Unified Process. New York: Wiley.
Kitchenham, B. A., Huges, R. T., & Linkman, S. G. (2002). Modeling software measurement data. IEEE Transactions on Software Engineering, 27(9), 788–804. doi:10.1109/32.950316
Arthur, L. J. (1997). Quantum improvement in software system quality. Communications of the ACM, 40(6), 47–52.

Beck, K. (1999). Embracing change with extreme programming. Computer, 32(10), 70–77. doi:10.1109/2.796139
Levine, D. M., Berenson, M., & Stephan, D. (1999). Statistics for managers. Upper Saddle River, NJ: Prentice Hall. Lucus, H. C. (n.d.). Implementation: The Key to Successful Information Systems. Columbia Management, 11(4), 191–198.
Carmel, E., & Sawyer, S. (1998). Packaged software development teams: what makes them different? Information Technology & People, 11(1), 7–19. doi:10.1108/09593849810204503
Luo, W., & Strong, D. M. (2004, August). A framework for evaluating ERP implementation choices. IEEE Transactions on Engineering Management, 51(3). doi:10.1109/TEM.2004.830862
Chidamber, S. R., & Kemerer, C. F. (1994). A metrics suite for object-oriented design. IEEE Transactions on Software Engineering, 20(6). doi:10.1109/32.295895
Parthasarathy, S., & Anbazhagan, N. (2007). Evaluating ERP Implementation Choices using AHP. International Journal of Enterprise Information Systems, 3(3), 52–65.
Davenport, T. H. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, 76(4), 121–131.
Pressman, R. S. (2001). Software engineering: A practitioner's approach. New Delhi, India: Tata McGraw-Hill.
Dunsmore, H. E. (1984). Software metrics: an overview of an evolving methodology. Information Processing & Management, 20, 183–192. doi:10.1016/0306-4573(84)90048-7
Robey, D., Ross, J. W., & Boudreau, M. C. (2002). Learning to implement enterprise systems: An exploratory study of the dialectics of change. Journal of Management Information Systems, 19(1).
Sambamurthy, V., & Kirsch, L. J. (2000). An integrative framework of the information systems development process. Decision Sciences, 31(2), 391–411. doi:10.1111/j.1540-5915.2000. tb01628.x
Stensrud, E., & Myrtveit, I. (2003, May). Identifying high performance ERP projects. IEEE Transactions on Software Engineering, 29(5).
Sharma & Goyal. (1994). Mathematical statistics. Meerut, India: Krishna Prakashan Mandir.
This work was previously published in Enterprise Information Systems and Implementing IT Infrastructures: Challenges and Issues, edited by S. Parthasarathy, pp. 51-60 , copyright 2010 by Information Science Reference (an imprint of IGI Global).
Section V
Organizational and Social Implications

This section includes a wide range of research pertaining to the social and organizational impact of enterprise information systems. Chapters included in this section analyze the impact of power relationships in system implementation, discuss how enterprise systems can be used to support internal marketing efforts, and demonstrate that perceived shared benefits, system characteristics, and the degree of knowledge of the system are significant influences on an individual's willingness to use enterprise resource planning systems. The inquiries and methods presented in this section offer insight into the implications of enterprise information systems at both a personal and organizational level, while also emphasizing potential areas of study within the discipline.
Chapter 5.1
Optimization of Enterprise Information System through a 'User Involvement Framework in Learning Organizations'

Sumita Dave
Shri Shankaracharya Institute of Management & Technology, India

Monica Shrivastava
Shri Shankaracharya Institute of Management & Technology, India
DOI: 10.4018/978-1-60566-723-2.ch005

ABSTRACT

Enterprise resource planning (ERP) today is being adopted by business organizations worldwide with a view to maximizing their capabilities. But more often than not, the expected outcomes are not delivered, due to inaccurate calculations with respect to the organization's ability to adapt to the change. Although the benefits of enterprise information systems in streamlining the functions of the organization cannot be questioned, preparing the organization to adopt the new system needs more focused efforts. In order to ensure that the existing capabilities of the organizations are an enabler and not an inhibitor in the adoption process, they need to be learning organizations. A study was conducted in Bhilai Steel Plant (BSP), one of the leading
steel manufacturing public companies in India, where ERP is to be adopted. In spite of the fact that it has a strong backbone of resources in terms of information technology (IT) infrastructure, the implementation process is virtually at a standstill. In this chapter, an evaluation of the psychological capabilities of the organization is done. These can be evaluated through the mindset of the workforce and the willingness with which they are ready to adopt change.
INTRODUCTION

Information Technology

Information Technology is the key driver for change and is instrumental in the creation of lean organizations where technology fully supports the implemen-
tation of quality enhancement techniques to meet the growing demands of competition. Moreover, competitive pressures and escalating maintenance costs are pressuring organizations to replace legacy systems of operations. The envisioned benefit of IT enabled change is the enhancement of competitive ability through the networking of geographically distant work groups and a more effective utilization of man, material and machine. While evaluating the benefits of enterprise information systems, the explicit outcome is a change in the organization's system as a whole to implement the new practices, processes and ideas. With the introduction of a knowledge base, the challenge for the organization gets magnified, as the perceived flexibility, when evaluated in physical terms, may be accurate but may fall short of the much-needed psychological flexibility. Hence, ERP and other forms of IT enabled solutions, which are being widely adopted with a view to maximizing capabilities, are not able to deliver the expected outcomes due to such inaccurate calculations. The implementation of any IT enabled operations system requires a systematic approach which includes the evaluation of the organization's learning capabilities. Hammer and Champy (1993) focused on IT based organizational reengineering. Their vision can be summarized in the following points:

1. Radical transformation: it is time consuming and does not happen overnight.
2. Changes come from a clean slate through the conceptualization of gradual work arrangements, unlike total quality management.
3. The focus of change should be process based.
4. The change needs to be initiated at the top and then directed downwards throughout the organization.
5. Seamless access to information for one and all.
Radical transformation: It is time consuming and does not happen overnight. Changes come from a clean slate through the conceptualization of gradual work arrangements unlike total quality management. The focus of change should be process based. The change needs to be initiated at the top and then directed downwards throughout the organization. and Seamless access to information to one and all.
Hence, in order to ensure that the IT enabled change acts as an enabler of growth, it becomes necessary to evaluate the firm’s learning capabilities. Organizational learning takes place when successful organization learning is transferred to an organization’s shared beliefs. Learning is the key competency required by any organization that wants to survive and thrive in the new knowledge economy. As organizations grow old though they accumulate competencies, resources and knowledge, there is a possibility that their structures become a hindrance to their ability to respond to the challenges posed by the competition. A constructivist-learning environment is a place where people can draw upon resources to make sense out of things and construct meaningful solutions to problems. It emphasizes the importance of meaningful, authentic activities that help the learner to construct understandings and develop skills relevant for solving problems. “Make learning part of every day office environment” is the mantra to survive in this competitive world. The Learning Organization is one that learns continuously and transforms itself. Learning takes place in individuals, teams, the organizations, and even the communities with which the organizations interact. Learning results in changes in knowledge, beliefs, and behaviors. Learning also enhances organizational capacity for innovation and growth. The Learning Organization has embedded systems or mechanisms to capture and share learning. Thus organizational learning is an important part of Organizational Transformation process.
Enterprise Information system An Enterprise Information System (EIS) is a type of management information system made to facilitate and support the information and decision making needs of senior executives by providing easy access to both internal and external information relevant to meeting the strategic goals, of the
organization. It is commonly considered a specialized form of decision support system. EISs are defined as computerized information systems designed to be operated directly by executive managers without the need of any intermediaries. Their aim is to provide fast and easy access to information from a variety of sources (both internal and external to the organisation). They are easily customizable and can be tailored to the needs and preferences of the individual executive using them. They deliver information of both a soft and hard nature. This information is presented in a format that can be easily accessed and most readily interpreted. This is usually achieved by the utilization of multiple modes of accessing data and the use of Graphical User Interfaces (GUIs). Choosing the appropriate software is vital to designing an effective EIS. Therefore, the software components and how they integrate the data into one system are very important. According to Wikipedia, the basic software needed for a typical EIS includes four components:

1. Text base software. The most common form of text is probably documents.
2. Database. Heterogeneous databases residing on a range of vendor-specific and open computer platforms help executives access both internal and external data.
3. Graphic base. Graphics can turn volumes of text and statistics into visual information for executives. Typical graphic types are: time series charts, scatter diagrams, maps, motion graphics, sequence charts, and comparison-oriented graphs (i.e., bar charts).
4. Model base. The EIS models contain routine and special statistical, financial, and other quantitative analyses.
Enterprise Resource Planning (ERP) helps to cater to the changing information requirements of the Enterprise Information System very efficiently and effectively. An EIS can provide key
indicators of company performance, based on the information in the ERP database and on external factors.
LEGACY SYSTEMS

The phrase "legacy replacement" has crept its way into almost everyone's vocabulary throughout both the business and technical ranks. Generally, the term legacy systems replacement is used to imply improvement, or elimination of the negative or obsolete. According to the dictionary, replace means "provide an equivalent for", but the new IT based solutions provide much more than the older systems. Enterprise-wide system replacements can be extremely disruptive to the entire organization, with catastrophic results if not well implemented. Flexibility, speed, immediate access and anywhere-anytime computing are no longer luxuries; they are now the necessities of interacting with customers in an online environment. The use of the internet and the availability of information online have sharply increased customer expectations for immediate service, and this relatively new expectation has left many businesses scrambling to adopt modern technology like ERP in order to meet the demand. During the past few years, business houses have gone through a technology and process learning curve to respond to these new service demands. For a multitude of technical reasons, legacy systems simply cannot be adequately retrofitted to address current and future business requirements in a cost- or time-effective manner.
Success Story Due to Legacy Systems

Bhilai Steel Plant, a unit of Steel Authority of India Limited (SAIL), a Navratna, is one of the most profitable steel companies in India. It was established in the year 1958. Many legacy systems are presently working in the organization, some of
Figure 1. (Adapted from C&IT Dept, Bhilai Steel Plant)
which are going to be replaced by the ERP system soon (see Figure 1). Studies have revealed that the rate of success of IT enabled change is rather low. Although IT enabled change has significantly altered the managerial paradigms within the organization, with a view to raising the organization to a higher plane of performance through a greater involvement of the employees at large by functionally integrating the various departments, a study conducted by us found that "success begets success" may not always hold true. Some of the areas where legacy systems are currently working and running successfully in BSP are shown in Table 1. Over the years substantial data has been collected, which is helping decision making with respect to different business processes. One of the most extensive systems of Bhilai Steel Plant is the Materials Management System. This system is as old as computerization in BSP. It started with IBM mainframes and tape drives and graduated to a VAX VMS based system developed with the
help of the Digital Company, was transformed with the introduction of SUN servers, the Oracle database and Developer 2000, and finally went on the web with Oracle 10g, Apache web servers and JSP, all developed in-house. The Materials Management System was developed to take care of the procurement of stores and spares, quality management, logistics management, bill processing, bill payment and the inventory management of these procured items. To facilitate the system, masters like the material master, vendor master, etc. were extensively developed, and sections were identified to maintain them. With the introduction of the ATM network across the plant, the systems were extended beyond the boundaries of materials management, to the end users. Planning sections across the plants were able to prepare purchase requisitions and send them online through the approval process to the Purchase Department for procurement action. Customized forms were developed in the legacy system to cater to this. They were also able to place reservations for items lying in the central stores. Various queries
Table 1.

Group: Financial Management
Areas: Finance, VMS, Oprn. A/c, RMBA, EFBS, Cost Control, Assets, Braman, ORAMS, E-Payment etc.
Benefits: Timely closing of accounts, online details of financial transactions, online CPF loan, faster & accurate payment to parties through e-payment, advance processing of employee tours etc.

Group: Sales and Order Progressing
Areas: OASIS, CISF Manpower Deployment System, PRAHARI, VATACS etc.
Benefits: Online invoice preparation, vehicle monitoring, daily duty chart for CISF, etc.

Group: Employee and Estate Services
Areas: Payroll for regular & DPR employees, Estate Services, CPF & Allied Jobs, Time Offices, Leave Accounting, VRS Module, Final Settlement module etc.
Benefits: Quarter allotment, salary payment on time, e-TDS, HB advances and other loans & recoveries, CPF ledger, estate third party billing, incentives calculation, VRS details etc.

Group: Personnel Administration, Hospital Management and Web Services
Areas: HRIS, HRD, HMS, Contract Mgmt, Contract Labour System, UDAN, DATES, PADAM, DBA Activities, Assignment Monitoring System, Web Services etc.
Benefits: Personal details, LTC/LLTC details, training details, patients & pharmacy accounting, all details & monitoring of contracts in non-works areas, registration & online attendance system, various departmental home pages etc.

Group: Networking, Hardware and Procurement
Areas: H/w Maintenance, Planning & Procurement, Networking and Computer Operation.
Benefits: Hardware complaints & monitoring, procurement of computer assets and consumables including stationery, 24 x 7 computer operation and monitoring of servers, various reports for users and top management etc.
were provided to end users, thereby improving the visibility of stock and reducing the over-indenting of stores and spares. This had a tremendous impact on the inventory turnover ratio, which has considerably gone down. With the introduction of the concept of door delivery through the Store Issue Note Generation System (SINGS), a number of material chasers were redeployed to other jobs in the shops, as the responsibility for the delivery of stores and spares was shifted to the stores department, which carried out this function centrally. The vehicles used by various shops were redeployed. Thus, due to the introduction of this system, there was a substantial reduction in manpower and vehicle requirements in the shops. With the complete purchasing activity on the system, the purchasing lead time was considerably reduced. The introduction of the in-house e-procurement system was a major step forward for the Materials Management System, as with this the legacy system was treading into the challenging Internet world, where encryption, security and transparency were the new challenges. This again was successfully implemented and has gone a long
way in reducing the purchasing lead time, thereby saving a lot of revenue for the company. Other units of SAIL have outsourced this application through a company, Metal Junction. The in-house development and maintenance of the e-procurement solution has saved a large outflow of revenue for the company. About 65% of the total volume of procurement is carried out through the e-procurement route. The computerization of the Store Bills and Store Accounting sections and their linking to the Materials Management System led to a huge reduction in the manpower previously required to process bills manually. The payment lead time was also considerably reduced, thereby winning the goodwill of the company's suppliers. The introduction of e-payment has further reduced the lead time and the manpower required. With the build-up of a database of transactions and historical data, a number of analyses are helping the company take correct decisions. Inventory analysis, ABC analysis, XYZ analysis, the lead time of procurement of items, and the analysis of non-moving and slow-moving items have helped in reducing inventory and in taking informed decisions during procurement.
Lately, the control of the procurement budget and consumption budget has been strengthened, and regular monitoring is being done by the higher management. Another area of concern for the management, shop inventory, has been addressed by a legacy system which monitors the shop floor inventory of BSP. All this is possible only due to the strong legacy systems in Bhilai Steel Plant. With the introduction of various legacy systems in the company, executives and non-executives alike had to take up learning computers. This has improved the skill set of the employees, and also of the developers of the legacy systems, who have to continuously upgrade themselves to keep pace with the industry. With the introduction of e-procurement, even the small suppliers have taken up computers and are benefiting hugely from this.
Limitations of the Present System in BSP

The legacy systems being used in the plant are able to meet the demands of the plant in the present scenario, but in days to come, with the advent of newer technologies and new competition, the present system will not be as effective. Hence, the management has decided to implement Enterprise Resource Planning in order to meet the changing demands of the market. The present systems have been made to cater to the demands of a particular area and hence are not holistic in nature. Because of this, there is no centralized database and the systems are not integrated, resulting in less user involvement and interaction with the system.
Need for Change

Most organizations across the world have realized that in a rapidly changing environment, it is impossible to create and maintain a custom designed or tailor-made software package which
will cater to all their requirements and also be completely up-to-date. Realizing the requirements of such organizations, software like the Enterprise Resource Planning solution, incorporating best business practices, has been designed to offer an integrated software solution to all the business functions of an organization. In the ever-growing business environment, the following demands often plague an organization:

• Cost control initiatives
• The need to analyze costs or revenues on a product or customer basis
• Flexibility to respond to ever-changing business requirements
• More informed management decision making through seamless information flow
• Changes in ways of doing business
• The need to simulate the complexity of variables affecting businesses

Difficulty in getting accurate data and timely information, and the improper interfacing of complex business functions, have been identified as the hurdles in the growth of any business. Time and again, depending upon the speed of growing business needs, many solutions have come up, and ERP is one such solution. Some of the features of ERP which are compelling organizations to make it part and parcel of their business activities are:

• It facilitates a company-wide integrated information system covering all functional areas like manufacturing, selling and distribution, payables, receivables, inventory, accounts, human resources, purchases, etc.
• It performs core activities and increases customer service.
• It bridges the information gap across the organisation, i.e., it results in a seamless flow of information.
• It supports multiple platforms, multiple currencies and multi-mode manufacturing, and is multi-lingual in nature.
• It provides for the complete integration of systems, not only across the departments in a company but also across the companies under the same management.
• It allows the automatic introduction of the latest technologies like Electronic Fund Transfer, Electronic Data Interchange (EDI), Internet, Intranet, E-Commerce, etc.
• It not only addresses the current requirements of the company but also provides the opportunity for continually improving and refining business processes.
• It integrates all the levels of the information system and provides business intelligence tools like Decision Support Systems (DSS), Executive Information Systems (EIS), management dashboards, strategic enterprise management, reporting, data mining and early warning systems (robots) for enabling people to make better decisions and thus improve their business processes.
• It also enables collaborative processes with business partners and links suppliers, the organization and customers more effectively.
In order to reap the benefits and implement change effectively, companies need to be learning organizations.

Table 2. (Adapted from FORUM Journal, Volume 93.1)
Learning organizations:
• Respond to environment change
• Tolerate stress
• Compete
• Exploit new niches
• Take risks/mutate
• Develop symbiotic relationships
Organization Learning

"Organizations where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to learn together" (Peter Senge, 1990)
Learning organizations are those that have in place systems, mechanisms and processes that are used to continually enhance their capabilities, and those of the people who work with or for them, to achieve sustainable objectives - for themselves and the communities in which they participate (see Table 2). According to Udai Pareek, "The concept of learning organization is a natural extension of organizational learning". Organizations today are changing in terms of values, structures, processes and expectations. We need to help employees prepare for living in the new organizations. In the new organizations, it is expected that employees have substantial content knowledge in their work specialization and are well prepared in the process and behavioral dimensions of their experiences in the changing organizations. Process and behavioral dimensions such as effective communication skills and negotiation skills are attributes that play a vital role in individual and group learning. Individuals at all levels in the organization must combine the mastery of some technical expertise with the ability to interact effectively with customers and clients, work productively in teams and critically reflect upon, and then change, their own original practices. A learning organization creates a language that is ideal, an approach, a vision to move towards a type of organization that one wants to work in and which can succeed in a world of increasing change and interdependency. A learning organization requires a basic shift in how we think and interact, and these changes go to the bedrock assumptions and habits of our culture.
Key Variables

According to Udai Pareek, the key variables which play a very important role during organization learning are:

1. Holistic frame (X1)
2. Strategic thinking (X2)
3. Shared vision (X3)
4. Empowerment (X4)
5. Information flow (X5)
6. Emotional maturity (X6)
7. Learning (X7)
8. Synergy (X8)
Holistic framework (X1): Learning should be done within a holistic framework, i.e., by taking into account the environment in which the organization functions, putting emphasis on the causes of a problem rather than its symptoms, learning not just for short-term gains but for a vision, and understanding the relationships and interrelations between the various facets of the organization.

Strategic thinking (X2): Careful strategies should be designed in terms of which areas are to be targeted first and what the implications of each step would be, clearly defining the roles and policies of the organization and creating an environment which will support learning. The strategy should be well communicated at all levels to ensure its success.

Shared vision (X3): Vision should be developed through employee participation. Communicating the vision and freezing it is also an important aspect which needs to be taken into consideration. The top management must lay emphasis on creating an environment full of transparency and motivation and help nurture creativity and leadership traits.

Empowerment (X4): Creating an environment with decentralized structures which enable more participation, more power, a culture of trust, faster decision making, and rewards and incentives is a very important aspect and cannot be ignored.
Information flow (X5): Transparency in working, seamless flow of information, sharing of critical information at all levels, minimum grapevine, and encouraging development by sharing are very important aspects.

Emotional maturity (X6): An environment of integrity, discipline, devotion, teamwork, transparency, mutual respect and moral responsibility should be created, which will help evolve a culture of trust, thus enabling the employees to be more loyal and committed, to control their emotions, and to keep the organization's achievements a step higher than their own personal achievements.

Learning (X7): A learning environment encourages self-development, allowing space for discussions, enquiries, freedom of speech and reward schemes. The management should felicitate people who are ready to accept change.

Synergy (X8): A dynamic, energetic atmosphere is created when participants interact and productively communicate with each other and in groups. The cooperative efforts of the participants create an enhanced combined effect compared to the sum of their individual effects. An effort towards coordinated activities, enhanced teamwork and cross-functional teams is a must for organizations to be called learning organizations.

Bhilai Steel Plant, a unit of SAIL with a strength of 34,800 employees, is an integrated steel plant with end-to-end processes from the generation of raw materials to the selling of finished goods. Different departments have different systems with little integration, resulting in duplicity and limited information flow. With the competitive business scenario and the need for advanced functionalities like Strategic Enterprise Management, Business Information Warehousing, Business Process Simulation, etc., the need for a standard world-class product is being felt which could enhance the effectiveness of the processes and functions within the system. The existing bureaucratic culture is impairing the holistic framework, empowerment, organization climate and shared vision, which is
making employees feel less motivated, and hence the overall learning process has taken a backseat. The major players in the learning process, namely structure, processes and information flow, are in the current scenario becoming a deterrent to the company outsmarting its competitors and facing the change in the economy. Hence an LOP survey was done on BSP to identify the key areas where improvement is possible, so as to enhance the learning capabilities of the organization and facilitate a smooth adoption of technologies.

Research Methodology

Sampling: Simple random sampling was undertaken in Bhilai Steel Plant; five (5) major departments, namely materials management, plant maintenance, purchase, finance and quality management, were covered.
Sample Size: 50 executives.
Data Collection Period: Prior to the implementation of ERP.
Statistical Instruments Used: Single-factor ANOVA, to test whether the 8 parameters are significantly different or not.
Level of Consideration: 5% level of significance.

Hypothesis

H0: All parameters have the same potential according to the employees (X1=X2=X3=X4=X5=X6=X7=X8).
H1: All the parameters do not have the same potential according to the employees (Xi ≠ Xj, where i, j = 1, 2, 3, 4, 5, 6, 7, 8).

Assumption

30 points are assumed to be the standard value of the parameters (where x >= 30). (See Table 3.)

Findings

The parameters X1, X3 and X6 (holistic framework, shared vision, emotional maturity) are below 30 (all respondents hold the same opinion).

Table 3. ANOVA: Single factor for elements X1, X3 and X6
SUMMARY
Groups    Count    Sum     Average    Variance
X1        50       1463    29.26      100.32
X3        50       1411    28.22      125.28
X6        50       1409    28.18      99.46

ANOVA
Source of Variation    SS          df     MS       F       P-value    F crit
Between Groups         37.49       2      18.75    0.17    0.84       3.06
Within Groups          15927.58    147    108.35
Total                  15965.07    149

H0: Accepted. Conclusion: X1, X3 and X6 are not significantly different at the 5% level of significance.
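Because only the group summaries (count, average, variance) are published, the ANOVA table can be reproduced without the raw questionnaire scores. The following Python sketch rebuilds Table 3 from those summaries; applying the same function to the Table 4 and Table 5 summaries reproduces their F values as well. Sample variances are assumed to use the usual n − 1 denominator.

```python
def anova_from_summary(groups):
    """One-way ANOVA from (count, mean, variance) summaries per group."""
    n_total = sum(n for n, _, _ in groups)
    grand_mean = sum(n * m for n, m, _ in groups) / n_total
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
    ss_within = sum((n - 1) * v for n, _, v in groups)
    df_between, df_within = len(groups) - 1, n_total - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, f

# Table 3 summaries: (count, average, variance) for X1, X3 and X6
ssb, ssw, f = anova_from_summary([(50, 29.26, 100.32),
                                  (50, 28.22, 125.28),
                                  (50, 28.18, 99.46)])
# SSW comes out as 15927.94 versus the published 15927.58; the small
# difference is due to the variances being rounded in the table.
print(round(ssb, 2), round(ssw, 2), round(f, 2))  # -> 37.49 15927.94 0.17
```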
Inference: It is concluded that the parameters X1, X3 and X6 are indeed below the assumed value, indicating that they are weak parameters. All the employees support this opinion (refer to Table 3).

The remaining five parameters (strategic thinking, empowerment, information flow, learning and synergy) are above 30 (all respondents hold the same opinion; see Table 4).

Inference: It is further concluded that the parameters X2, X4, X5, X7 and X8 are the strong parameters, and all the employees support this. This is corroborated by Table 5, where all parameters are taken together and the employees hold different opinions, which suggests that there is a bipolarization of opinion.

Inference: When ANOVA is conducted on all 8 parameters at a time, employees hold different opinions about different parameters. It can be observed that BSP lacks
1. A holistic framework of operations,
2. Shared vision - free flow of communication and hence transparency, and
3. Emotional maturity - reflected in poor teamwork and a lack of commitment and mutual respect.
Suggestions

In order to overcome the identified deficiencies and to induce organization learning in a better way, it is essential that BSP adopts practices to make the ground more fertile for adopting ERP and thus initiate the process of transformation. It needs to:

1. Search for novel solutions
2. Give and seek information
3. Honour the contributions of others
4. Seek and give evaluation
5. Question basic assumptions and practices

In order to encourage experimental learning and collaborative teamwork, which is both problem finding as well as problem solving, the suggested methods to stimulate creative learning include:

1. RAT (Role Analysis Technique)
2. Diagnostic Window
Table 4. ANOVA: Single factor for elements X2, X4, X5, X7 and X8

SUMMARY
Groups    Count    Sum     Average    Variance
X2        50       1657    33.14      89.43
X4        50       1614    32.28      135.96
X5        50       1561    31.22      165.73
X7        50       1858    37.16      217.16
X8        50       1588    31.76      181.66

ANOVA
Source of Variation    SS           df     MS       F       P-value    F crit
Between Groups         1124.344     4      281.09   1.78    0.13       2.41
Within Groups          38706.520    245    157.99
Total                  39830.864    249

H0: Accepted. Conclusion: X2, X4, X5, X7 and X8 are not significantly different at the 5% level of significance.
Table 5. ANOVA: Single factor for all elements X1 to X8

SUMMARY
Groups    Count    Sum     Average    Variance
X1        50       1463    29.26      100.32
X2        50       1657    33.14      89.43
X3        50       1411    28.22      125.28
X4        50       1614    32.28      135.96
X5        50       1561    31.22      165.73
X6        50       1409    28.18      99.46
X7        50       1858    37.16      217.16
X8        50       1588    31.76      181.66

ANOVA
Source of Variation    SS            df     MS       F       P-value    F crit
Between Groups         3110.0975     7      444.30   3.19    0.003      2.03
Within Groups          54634.1000    392    139.37
Total                  57744.1975    399

H0: Rejected. Conclusion: Elements X1 to X8 are significantly different at the 5% level of significance.
RAT is a tool for improving the effectiveness of work groups. It helps to clarify role expectations, i.e., the expectations the members of the work group have for their own performance and for the performance of other group members. The role requirements are determined by consensus, which ultimately results in more effective and mutually satisfactory performance. It enhances participation and collaboration, and hence an enhanced holistic framework, shared vision and emotional maturity. The Diagnostic Window helps groups to identify important issues and problems. The issues are discussed according to the quadrants shown in Table 6. The idea is to reach a consensus on issues and the likelihood of change where it is required, and to help the group define some action plans to address the needed change. This activity not only encourages participation and communication
but also enhances commitment, thus stimulating critical thinking about the organization's needs and priorities. RAT and the Diagnostic Window can be most helpful in conditions of structural change such as that of BSP, as they help to address personal and behavioral adjustments to the creation of change.
Table 6. The diagnostic window

Amenable to change:       Potential   Operational
Not amenable to change:   Disaster    Temporary

CONCLUSION

To be able to maintain market share, there is a definite need for change, and organizations that are capable of learning are able to implement it in a more effective manner. Change is accompanied by resistance, which can be managed
by proper user training and communication. Thus, although it is very necessary for organizations to adopt IT-enabled operations, a proper analysis of the existing legacy system needs to be done in the light of the proposed new system, so as to ensure the full and timely acceptance of the new technology.
REFERENCES

Amburgey, T., Kelly, D., & Barnett, W. P. (1993). Resetting the clock: The dynamics of organizational change and failure. Administrative Science Quarterly, 38, 51-73. doi:10.2307/2393254

Argyris, C. (1982). Reasoning, learning and action: Individual and organizational. San Francisco: Jossey-Bass.

Argyris, C. (1991). Teaching smart people how to learn. Harvard Business Review.

Baron, J. N., Burton, M. D., & Hannan, M. T. (1996). The road taken: Origins and evolution of employment systems in emerging companies. Industrial and Corporate Change, 5(2), 239-275.

Booth, W. C., Colomb, G. G., & Williams, J. M. (2003). The craft of research. Chicago: University of Chicago Press.

Bordens, K. S., & Abbott, B. B. (2005). Research design and methods (6th ed.). Tata McGraw-Hill.

Coulson-Thomas, C. (1996). Business process reengineering: Myth and reality. London: Kogan Page.

Courtney, N. (1996). BPR sources and uses. In C. Coulson-Thomas (Ed.), Business process reengineering: Myth and reality (pp. 226-250). London: Kogan Page.

Davenport, T. H. (1996). Process innovation: Reengineering work through information technology. Boston: Harvard Business School Press.

Dayal, I., & Thomas, J. M. (1968). Role analysis technique, operation KPE: Developing a new organization. The Journal of Applied Behavioral Science, 4(4), 473-506. doi:10.1177/002188636800400405

Garvin, D. A. (1993). Building a learning organization. Harvard Business Review, 71(4), 78-91.

Gupta, V. (2004). Transformative organizations: A global perspective. New Delhi: Response Books.

Hammer, M., & Champy, J. (1993). Reengineering the corporation: A manifesto for business revolution. New York: Harper Collins.

Hammer, M., & Stanton, S. A. (1995). The reengineering revolution handbook. London: Harper Collins.

Leedy, P. D., & Ormrod, J. E. (2004). Practical research: Planning and design. PHI.

Linden, R. (1994). Seamless government: A practical guide to reengineering in the public sector. San Francisco: Jossey-Bass.

Lyons, P. (1999). Assessment techniques to enhance organization learning. Opinion papers.

Pareek, U. (2002). Training instruments in HRD & OD (2nd ed.). Tata McGraw-Hill.

Ruma, S. (1974). A diagnostic model for organizational change. Social Change, 3-5.

Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday/Currency.

Senge, P. (1991). An interview with Peter Senge: Learning organizations made plain. Training & Development.

Shajahan, S. (2005). Research methods for management (3rd ed.). Mumbai: Jaico Publishing House.

Vaughan, D. (1997). The trickle-down effect: Policy decisions, risky work, and the Challenger tragedy. California Management Review, 39(2), 80-102.

This work was previously published in Always-On Enterprise Information Systems for Business Continuance: Technologies for Reliable and Scalable Operations, edited by Nijaz Bajgoric, pp. 78-90, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 5.2
Authority and Its Implementation in Enterprise Information Systems

Alexei Sharpanskykh
Vrije Universiteit Amsterdam, The Netherlands
ABSTRACT

The concept of power is inherent in human organizations of any type. As power relations have important consequences for organizational viability and productivity, they should be explicitly represented in enterprise information systems (EISs). Although organization theory provides a rich and very diverse theoretical basis on organizational power, still most of the definitions for power-related concepts are too abstract, often vague and ambiguous to be directly implemented in EISs. To create a bridge between informal organization theories and automated EISs, this article proposes a formal logic-based specification language for representing power (in particular authority) relations. The use of the language is illustrated by considering authority structures of
organizations of different types. Moreover, the article demonstrates how the formalized authority relations can be integrated into an EIS.
INTRODUCTION

The concept of power is inherent in human organizations of any type. Power relations that exist in an organization have a significant impact on its viability and productivity. Although the notion of power is often discussed in the literature in social studies (Gulick & Urwick, 1937; Parsons, 1947; Friedrich, 1958; Blau & Scott, 1962; Peabody, 1964; Hickson et al., 1971; Bacharach & Aiken, 1977; Clegg, 1989), it is only rarely defined precisely. In particular, power-related terms (e.g., control, authority, influence) are often used
interchangeably in this literature. Furthermore, the treatment of power in different streams of sociology differs significantly. One of the first definitions of power in modern sociology was given by Max Weber (1958): power is the probability that a person can carry out his or her own will despite resistance. Weber and his followers (Dahl, Polsby) considered power as an inherently coercive force that implied involuntary submission, and ignored the relational aspect of power. Other sociologists (Bierstedt, Blau) considered power as a force or the ability to apply sanctions (Blau & Scott, 1962). Such a view was also criticized as restrictive, as it paid no attention to indirect sources and implications of power (e.g., informal influence in decision making) and to the subordinate's acceptance of power. Parsons (1947) considered power as "a specific mechanism to bring about changes in the action of organizational actors in the process of social interaction." Most contemporary organization theories explore both formal (normative, prescribed) and informal (subjective, human-oriented) aspects of power (Peabody, 1964; Clegg, 1989; Scott, 2001). Formal power relations are documented in many modern organizations and can therefore be explicitly represented in the models on which enterprise information systems (EISs) are based. The representation of formal power in EISs has a number of advantages. First, it allows a clear definition of rights and responsibilities for organizational roles (actors) and of a power structure. Second, based on the role specifications, corresponding permissions for information, resources and actions can be specified for each role. Third, explicitly defined rules on power enable the identification of violations of organizational policies and regulations. Fourth, data about power-related actions (e.g., empowerment, authorization) can be stored in an EIS for subsequent analysis. For the modeling of power relations, the rich theoretical basis from social science can be used. Notably, many modern EISs implement no or very simplified representations of power relations and
mechanisms. In particular, the ARIS architecture (Scheer & Nuettgens, 2000), used for the development of EISs, identifies responsibility and managerial authority relations on organizational roles; however, it provides no general mechanisms for representing such relations and does not address change of these relations over time. The enterprise architecture CIMOSA (1993) distinguishes responsibilities and authorities on enterprise objects, agents, and processes/activities. However, no precise meaning (semantics) is attached to these concepts, which may therefore be interpreted differently in different applications. Also, different aspects of authority (e.g., authority for execution, authority for supervision, authority for monitoring) are distinguished neither in ARIS nor in CIMOSA. Often EISs realize extensive access schemata that determine allowed actions for roles and modes of access of roles to information (Bernus, Nemes, & Schmidt, 2003). Normally, such schemata are based on the power relations established in organizations. Thus, to ensure consistency, unambiguousness and completeness of EISs' access schemata, organizational power relations should be precisely identified and specified using some (formal) language. To this end, theoretical findings on organizational power from social science are useful to consider. However, there is an obstacle to the direct implementation of this knowledge in EISs: the absence of operational definitions of power-related concepts in social theories. The first step to make the concept of power operational is to provide a clear and unambiguous meaning for it (or for its specific aspects). In this article, this is done by identifying the most essential characteristics and mechanisms of power described in different approaches and by integrating them into two broad categories: formal power (or authority) and informal power (or influence), which are described in the Power, Authority and Influence section. Further, this article focuses on the formal representation of authority, for which a formal language is described in the Authority: A Formal Approach section. Moreover, this section
illustrates how the introduced formal language can be used to model authority systems of different types of organizations. The next section discusses the integration of formal authority relations into an automated EIS. Finally, the article concludes with a Discussion section.
POWER, AUTHORITY AND INFLUENCE

As in many contemporary social theories (Peabody, 1964; Clegg, 1989), we assume that power can be practiced in an organization either through (formal) authority or through (informal) influence relations. Authority represents formal, legitimate organizational power by means of which a regulated normative relationship between a superior and a subordinate is established. Usually, authority is attached to positions in organizations. For example, the authority of some managerial positions provides power to hire or to fire; to promote or to demote; to grant incentive rewards or to impose sanctions. In many approaches, it is assumed that authority implies involuntary obedience from subordinates. Indeed, as authority has a normative basis that comprises formal, explicitly documented rules, it is expected that subordinates, hired by the organization, should be aware of and respect these rules, which implies the voluntary acceptance of authority. All manifestations of power that cannot be explained from the position of authority fall into the category of influence. In contrast to authority, influence does not have a formal basis. It is often persuasive and implies voluntary submission. Some of the bases of influence are technical knowledge, skills, competences and other characteristics of particular individuals. Influence is often exercised through mechanisms of leadership; however, possession of certain knowledge or access to some resources, as well as different types of manipulation, may also create influence. Influence may be realized in efforts to affect organizational decisions indirectly.
Although authority and influence often stem from different sources, they are often interrelated in organizations. For example, the probability of successful satisfaction of organizational goals increases when a strong leader (i.e., a leader with a high degree of influence) occupies a superior position of authority. Furthermore, patterns of influence that frequently occur in an organization may sometimes become institutionalized (i.e., may become authority relations). Modeling methods for authority and influence are essentially different. While authority relations are often prescriptive and explicitly defined, influence relations are not strictly specified and may vary to a great extent. Therefore, whereas authority relations can generally be represented in EISs, the specification of influence relations is dependent on the particular (cognitive) models of agents that represent organizational actors. Relations between authority and influence can be studied by performing simulations with different types of agents situated in different organizational environments. The focus of this article is on the modeling of formal authority relations. Influence relations and relations between authority and influence will be considered elsewhere.
AUTHORITY: A FORMAL APPROACH

First, a formal language for specifying authority-related concepts and relations is introduced. The next section discusses how the introduced language can be used for representing authority structures of organizations of different types.
A Formal Language

Simon (1957) describes three contributions of authority for an organization: (1) the enforcement of responsibility, (2) the specialization of decision making, and (3) the coordination of activity. Based on this and other theoretical findings that describe
power, duties and responsibilities of organizational positions (Mintzberg, 1979), a number of relations for the specification of formal authority can be identified. These relations are defined on positions (or roles), without considering particular agents (individuals). The relations are formalized using the order-sorted predicate language (Manzano, 1996) and are presented graphically in Figure 1. We represent all activities of an organization (including decision making and personnel-related activities) by processes. Each organizational role is associated with one or more processes. Roles may have different rights and responsibilities with respect to different aspects of process execution. Furthermore, often several roles may potentially execute or manage certain processes. This is represented by the relation

is_authorized_for: r: ROLE x aspect: ASPECT x a: PROCESS,

where aspect has one of the values {execution, monitoring, consulting, tech_des (making technological decisions), manage_des (making managerial decisions), user_defined_aspect}.
Figure 1. Graphical representation of the concepts and relations of the language used for specifying formal authority relations

All types of decisions with respect to a particular process can be divided into two broad groups: technological and managerial decisions (inspired by Bacharach and Aiken [1977]). Technological decisions concern technical questions related to the process content and are usually made by technical professionals. Managerial decisions concern general organizational issues related to the process (e.g., the allocation of employees, process scheduling, the establishment of performance standards, the provision of resources, presenting incentives and sanctions). Managers of different levels (i.e., from the lowest-level line managers to strategic apex [top] managers) may be authorized for making different types of managerial decisions varying in scope, significance and detail. A particular decision type is specified as an aspect in the is_authorized_for relation. The same holds for technological decisions. Whereas consulting has the form of recommendation and implies voluntary acceptance of advice, decisions imposed on the role(s) that execute(s) the process are considered as imperatives with corresponding implications. Authorization for execution implies that a role is allowed to execute the process according
to existing standards and guidelines. Whenever a problem, a question or a deviation from the standard procedures occurs, the role must report it to the role(s) authorized for making technological/managerial (depending on the problem type) decisions and must execute the decision(s) that will follow. Monitoring implies passive observation of (certain aspects of) process execution, without intervention. Notice that other aspects of process execution described in the managerial literature (e.g., control, supervision) can be represented as combinations of the already introduced aspects. In particular, control can be seen as the conjunction of the monitoring and the making of technological and/or managerial decisions aspects; supervision can be defined as the combination of consulting and control. Furthermore, the designer is given the possibility to define his/her own aspects and to provide an interpretation for them. Although several roles in an organization may be authorized for a certain aspect related to some process, only one (or some) of them will eventually be responsible for this aspect. For example, the responsibility of a certain role with respect to process execution means that the role is actually the one who will be performing the process and who holds accountability for the process execution. Furthermore, responsibility for process execution implies allowance to use the resources required for the process performance. The responsibility relation is specified as

is_responsible_for: r: ROLE x aspect: ASPECT x a: PROCESS: process a is under the responsibility of role r with respect to aspect (defined as for is_authorized_for).

Some roles are authorized to make managerial decisions for authorizing/disallowing other roles for certain aspects of process execution. The authorization/disallowance actions are specified by the following relations:
authorizes_for: r1: ROLE x r2: ROLE x aspect: ASPECT x a: PROCESS: role r1 gives the authority for aspect of process a to role r2.

disallows: r1: ROLE x r2: ROLE x aspect: ASPECT x a: PROCESS: role r1 denies the authority for aspect of process a for role r2.

However, to make a role actually responsible for a certain aspect of a process, another role, besides having the authority to make managerial decisions, should also be the superior of that role with respect to the process. Superior-subordinate relations with respect to organizational processes are specified by is_subordinate_of_for: r1: ROLE x r2: ROLE x a: PROCESS. Then, responsibility is assigned/retracted using the following relations:

assigns_responsibility_to_for: r1: ROLE x r2: ROLE x aspect: ASPECT x a: PROCESS: role r1 assigns the responsibility for aspect of process a to role r2.

retracts_responsibility_from_for: r1: ROLE x r2: ROLE x aspect: ASPECT x a: PROCESS: role r1 retracts the responsibility for aspect of process a from role r2.

Using these relations, superiors may delegate (their) responsibilities for certain aspects of process execution to their subordinates, or retract them, and may restrict themselves to control and to making decisions in exceptional situations. In Hickson et al. (1971), control over resources is identified as an important source of power. Therefore, it is useful to identify explicitly which roles control resources, by means of the relation has_control_over: r1: ROLE x res: RESOURCE. In the proposed modeling framework, the notion of resource includes both tangible (e.g., materials, tools, products) and abstract (information, data) entities. Our treatment of authority is different both from formal approaches that consider authority as an attribute or a property inherent in an organization (Gulick & Urwick, 1937; Weber, 1958) and from the human-relations view that recognizes authority as an informal, non-rational and subjective relation (e.g., Follett, Mayo, cf. [Clegg, 1989]). Like many representatives of converging approaches (e.g., C. I. Barnard, Simon [1957]), we distinguish between the formal authority prescribed by organizational policies and the actual authority established between a superior and his/her subordinate in the course of social interactions. In the latter case, a special accent lies on the acceptance of authority by the subordinate. In Clegg (1989), different cases of authority acceptance are discussed: orders anticipated and carried out (anticipation); acceptance of orders without critical review; conscious questioning but compliance (acceptance of authority); discussing but working for changes; ignoring, evading or modifying orders (modification and evasion); rejection of authority (appeals to co-workers or higher ranks for support). Depending on the organizational type, varying administrative sanctions may be applied in case an employee does not accept an authoritative communication even though he/she: (a) correctly understands/interprets this communication; (b) realizes that this communication complies with formal organizational documents and/or is in line with organizational goals; and (c) is mentally and physically able to perform the required actions. In many modern organizations, rewards and sanctions form a part of the authority relation and are thus explicitly defined:

grants_reward_to_for: r1: ROLE x r: REWARD x r2: ROLE x reason: STRING: role r1 grants reward r to role r2 for reason.

imposes_sanction_on_for: r1: ROLE x s: SANCTION x r2: ROLE x reason: STRING: role r1 imposes sanction s on role r2 for reason.
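As an illustration of how these relations could be held and queried inside an EIS, the following Python sketch (our own illustrative rendering, not the authors' implementation; the fact representation and all role/process names are assumptions) encodes a few authority facts and checks the two necessary conditions for assigning responsibility stated above:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AuthorityFact:
    relation: str            # e.g., "is_authorized_for", "is_responsible_for"
    role: str                # r, or r1 for binary role relations
    aspect: Optional[str]    # execution, monitoring, ..., or None if not applicable
    process: Optional[str]
    other_role: Optional[str] = None  # r2 in is_subordinate_of_for, authorizes_for, ...

facts = {
    AuthorityFact("is_authorized_for", "employee_A", "execution", "p1"),
    AuthorityFact("is_responsible_for", "employee_A", "execution", "p1"),
    AuthorityFact("is_subordinate_of_for", "employee_A", None, "p1", "manager_1"),
    AuthorityFact("is_responsible_for", "manager_1", "manage_des", "p1"),
}

def may_assign_responsibility(r1: str, r2: str, process: str) -> bool:
    """r1 may assign responsibility to r2 for a process only if r1 is responsible
    for managerial decisions on it and r2 is r1's subordinate for that process."""
    manages = AuthorityFact("is_responsible_for", r1, "manage_des", process) in facts
    subordinate = AuthorityFact("is_subordinate_of_for", r2, None, process, r1) in facts
    return manages and subordinate

print(may_assign_responsibility("manager_1", "employee_A", "p1"))  # True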
Sometimes authority relations may be defined with respect to particular time points or intervals (e.g., responsibility for some aspect of a process may be provided for some time interval). To express temporal aspects of authority relations, the temporal trace language (TTL) (Jonker & Treur, 2003) is used. TTL allows specifying the temporal development of an organization by a trace. A trace is defined as a temporally ordered sequence of states. Each state corresponds to a particular time point and is characterized by a set of state properties that hold in this state. State properties are formalized in a standard predicate logic way (Manzano, 1996) using state ontologies. A state ontology defines a set of sorts or types (e.g., ROLE, RESOURCE), sorted constants, functions and predicates. States are related to state properties via the formally defined satisfaction relation |=: state(γ, t) |= p, which denotes that state property p holds in trace γ at time t. For example, state(γ1, t1) |= is_responsible_for(employee_A, execution, p1) denotes that in trace γ1 at time point t1 employee_A is responsible for the execution of process p1. Dynamic properties are specified in TTL by relations between state properties. For example, the following property expresses the rule of a company's policy that an employee is made responsible for making technological decisions with respect to process p1 after s/he has been executing this process for two years (730 days):

∀γ: TRACE ∀t1: TIME ∀empl: EMPLOYEE
state(γ, t1) |= is_responsible_for(empl, execution, p1) &
∃t2: TIME state(γ, t2) |= assigns_responsibility_to_for(management, empl, execution, p1) & t1 - t2 = 730
⇒ state(γ, t1) |= assigns_responsibility_to_for(management, empl, tech_des, p1).

Other specific conditions (e.g., temporal, situational) under which authority relations may be created/maintained/dissolved are defined by executable rules expressed by logical formulae. The specification of these rules will be discussed in the Integration of Authority Relations into an EIS section.
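The following Python sketch mimics how the two-year promotion rule above could be evaluated over a trace. It is only a hedged illustration: TTL is a logical language rather than code, and the dict-based states, integer day counts and the "management"/"employee_A" names are our assumptions.

trace = {  # time point (days) -> set of state properties that hold in that state
    0:   {("assigns_responsibility_to_for", "management", "employee_A", "execution", "p1")},
    1:   {("is_responsible_for", "employee_A", "execution", "p1")},
    730: {("is_responsible_for", "employee_A", "execution", "p1")},
}

def holds(t: int, prop: tuple) -> bool:
    """state(trace, t) |= prop"""
    return prop in trace.get(t, set())

def apply_promotion_rule(empl: str) -> None:
    # For every t1 where the antecedent holds, add the consequent to state t1.
    for t1 in trace:
        if holds(t1, ("is_responsible_for", empl, "execution", "p1")) and any(
            holds(t2, ("assigns_responsibility_to_for", "management", empl, "execution", "p1"))
            and t1 - t2 == 730
            for t2 in trace
        ):
            trace[t1].add(("assigns_responsibility_to_for", "management", empl, "tech_des", "p1"))

apply_promotion_rule("employee_A")
print(("assigns_responsibility_to_for", "management", "employee_A", "tech_des", "p1")
      in trace[730])  # True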
Modeling Authority Relations in Different Types of Organizations

Authority is enforced through the organizational structure and the norms (or rules) that govern organizational behavior. In general, no single authority system can be equally effective for all types of organizations at all times. An organizational authority system is contingent upon many organizational factors, among which are organizational goals, the level of cohesiveness between different parts of an organization, the levels of complexity and of specialization of jobs, the level of formalization of organizational behavior, the management style (a reward system, decision making and coordination mechanisms), and the size of an organization and its units. Furthermore, the environment type (its uncertainty and dynamism; the number of competitors), as well as the frequency and type of interactions between an organization and the environment, exert a significant influence upon an organizational authority structure. In the following, it is discussed how authority is realized in some types of (mostly industrial) organizations and how it can be modeled using the relations introduced in the previous section. Authority in small firms of the early industrial era was exercised completely by their owners through mechanisms of direct personal control. Firm owners were managers and technical professionals at the same time and, therefore, had authority and responsibility for all aspects related to processes, except for their execution, responsibility for which was assigned to hired workers. This can be expressed using the introduced formal language as follows:

∀p: PROCESS ∀t: TIME ∀γ: TRACE ∃empl: HIRED_EMPLOYEE state(γ, t) |= [ is_responsible_for(firm_owner, control, p) & is_responsible_for(firm_owner, supervision, p) & is_responsible_for(empl, execution, p) ].
The owners controlled all resources (∀r: RESOURCE ∀t: TIME ∀γ: TRACE state(γ, t) |= has_control_over(firm_owner, r)). Currently, similar types of organizations can be found in family businesses and small firms. With the growth of industry, which caused the joining of small firms into larger enterprises, owners were forced to hire subcontractors, who took over some of their managerial functions. This can be modeled using the introduced language as the assigning of responsibility to subcontractors by the owner for some managerial and technological decisions, as well as for the monitoring and consulting of workers with respect to the execution of some processes. For example, the responsibility assignment to role subcontractor_A for making managerial and technological decisions related to process p1 is expressed as:

∀γ: TRACE ∃t: TIME state(γ, t) |= [ assigns_responsibility_to_for(firm_owner, subcontractor_A, tech_des, p1) ∧ assigns_responsibility_to_for(firm_owner, subcontractor_A, manage_des, p1) ].

The owner often reserved the right of control for himself, which included granting rewards and imposing sanctions to/on subcontractors and workers, realized through superior-subordinate relations. For example, the following rule describes the superior-subordinate relations between the firm owner, subcontractor_A (responsible for making technological decisions related to process p1) and employee_A (responsible for the execution of process p1):

∀γ: TRACE ∀t: TIME state(γ, t) |= is_subordinate_of_for(subcontractor_A, firm_owner, p1) & is_subordinate_of_for(employee_A, firm_owner, p1).

Organizational resources were usually controlled by the owner.
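A rough Python rendering of the early-industrial-firm pattern formalized above (all process, resource and role names are hypothetical) shows how such a specification could be instantiated and its universally quantified property checked:

# Illustrative instantiation of the small-firm authority pattern.
processes = {"forging", "assembly"}
resources = {"furnace", "raw_iron"}

responsible_for = (
    {("firm_owner", "control", p) for p in processes}
    | {("firm_owner", "supervision", p) for p in processes}
    | {("worker_1", "execution", p) for p in processes}  # hired worker
)
has_control_over = {("firm_owner", r) for r in resources}

# For every process: the owner holds control and supervision, and some hired
# employee (i.e., a role other than the owner) holds execution; the owner
# also controls every resource.
ok = all(
    ("firm_owner", "control", p) in responsible_for
    and ("firm_owner", "supervision", p) in responsible_for
    and any(role != "firm_owner" and (role, "execution", p) in responsible_for
            for role, _, _ in responsible_for)
    for p in processes
) and all(("firm_owner", r) in has_control_over for r in resources)
print(ok)  # True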
Large industrial enterprises of the 20th century are characterized by a further increase in the number of managerial positions structured hierarchically by superior-subordinate relations. Such organizations are often defined as mechanistic (Scott, 2001) and have the following typical characteristics: strong functional specialization, a high level of process formalization, and a hierarchical structure reinforced by a flow of information to the top of the hierarchy and by a flow of decisions/orders from the top. Responsibilities were clearly defined for every position in the hierarchy. In most organizations of this type, the responsibility for execution was separated from the responsibilities to make decisions. Managerial positions differed in the power to make decisions depending on the level in the hierarchy. Often, technological decisions were made by managers of lower levels (or even by dedicated positions to which execution responsibilities were also assigned), whereas managerial decisions were made by managers at the apex. For example, the following formal expression identifies one of the upper managers responsible for making strategic decisions related to process p, one of the middle-level managers responsible for making tactical decisions related to p, and one of the first-level managers responsible for making technological decisions related to p:

∃manager1: UPPER_MANAGER ∃manager2: MIDDLE_LEVEL_MANAGER ∃manager3: FIRST_LEVEL_MANAGER ∀γ: TRACE ∀t: TIME state(γ, t) |= [ is_responsible_for(manager1, making_strategic_decisions, p) ∧ is_responsible_for(manager2, making_tactical_decisions, p) ∧ is_responsible_for(manager3, tech_des, p) ].

In many such organizations, managers at the apex shared the responsibility for making (some) decisions with lower-level managers. Therefore, decisions that were usually proposed by lower-level managers had to be approved by the apex managers. In connection with the previous example,
the following superior-subordinate relations can be identified: is_subordinate_of_for(manager2, manager1, p) & is_subordinate_of_for(manager3, manager2, p). Initially, such enterprises operated in relatively stable (however, sometimes complex) environmental conditions that reinforced their structure. However, later in the second half of the 20th century, to survive and to achieve goals in the changed environmental conditions (e.g., a decreased amount of external resources; increased competition; diversification of markets), enterprises and firms were forced to change their organizational structure and behavior. In response to the increased diversity of markets, within some enterprises specialized, market-oriented departments were formed. Such departments had much of autonomy within organizations. It was achieved by assigning to them the responsibility for most aspects related to processes, which created products/services demanded by the market. Although department heads still were subordinates of (apex) manager(s) of the organization, in most cases the latter one(s) were restricted only to general performance control over departments. Often departments controlled organizational resources necessary for the production and had the structure of hierarchical mechanistic type. Although a hierarchical structure proved to be useful for coordination of activities of organizations situated in stable environments, it could cause significant inefficiencies and delays in organizations situated in dynamic, unpredictable environmental conditions. Furthermore, the formalization and excessive control over some (e.g., creative and innovative) organizational activities often can have negative effects on productivity. Nowadays, large enterprises often create project teams or task forces that are given complex, usually innovative and creative tasks without detailed descriptions/prescriptions. As in the case with departments, teams are often assigned the responsibility to make technological and (some) managerial decisions and are given necessary
1203
Authority and Its Implementation in Enterprise Information Systems
resources to perform their tasks. For example, the following formal expression represents the responsibility assignment to team_A for making technological and strategic managerial decisions related to the process of developing a design for a new product:

∀γ: TRACE ∃t: TIME state(γ, t) |= [ assigns_responsibility_to_for(management, team_A, tech_des, develop_design_new_product_A) ∧ assigns_responsibility_to_for(management, team_A, strategic_managerial_des, develop_design_new_product_A) ].

Usually teams have highly cohesive, flat structures, with participants selected from different organizational departments based on the knowledge, skills and experience required for the processes assigned to these teams. Although many teams implement informal communication and participative decision-making principles (Lansley, Sadler, & Webb, 1975), formal authority relations can also be found in teams. In particular, in some project teams superior-subordinate relations exist between the team manager and the team members. In this case, whereas the responsibility for making technological decisions is given to team members, the responsibility for most managerial decisions is assigned to the team manager. The members of such teams, being also members of some functional departments or groups, then have at least two superiors. In other teams, the team manager plays the integrator role and does not have formal authority over team members. In this case, the responsibility for decisions made by the team lies with all members of the team. Sometimes, to strengthen the position of a team manager, s/he is given control over some resources (e.g., budgets) that can be used, for example, to provide material incentives to the team members. The principles on which teams are built come close to the characteristics of the organic organizational form (Scott, 2001). Some such organizations do not have any formal authority structure;
others allow much flexibility in defining authority relations between roles. In the former case, formal authority is replaced by socially created informal rules. In the latter case, authority may be temporarily provided to the role that has the most relevant knowledge and experience for the current organizational tasks. In many organic organizations, formal control and monitoring are replaced by informal mutual control and audit. For the investigation of the dynamics of organic organizations, informal aspects such as influence, leadership and the mental models of employees are highly relevant; these will be discussed elsewhere. Often interactions between organic organizations (e.g., of the network type) are regulated by contracts. Usually, contracts specify legal relationships between parties that explicitly define their rights and responsibilities with respect to some processes (e.g., production, supply of services). Several organizations may be involved in the execution of a process (e.g., supply chains for product delivery); therefore, particular aspects of responsibility need to be identified in contracts for such processes. The introduced language may be used for specifying such responsibilities and their legal consequences through reward/sanction mechanisms.
INTEGRATION OF AUTHORITY RELATIONS INTO AN EIS

In our previous work, a general framework for formal organizational modeling and analysis was introduced (Popova & Sharpanskykh, 2007c). It comprises several perspectives (or views) on organizations, similar to the ones defined in the Generalized Enterprise Reference Architecture and Methodology (GERAM) (Bernus, Nemes, & Schmidt, 2003), which forms a basis for the comparison of existing architectures and serves as a template for the development of new architectures. In particular, the performance-oriented view (Popova & Sharpanskykh, 2007b) describes organizational goal structures, performance indicator structures, and the relations between them. The process-oriented view (Popova & Sharpanskykh, 2007a) describes task and resource structures, and dynamic flows of control. In the agent-oriented view, different types of agents with their capabilities are identified and principles for allocating agents to roles are formulated. Concepts and relations within every view are formally described using dedicated formal predicate-based languages. The views are related to each other by means of sets of common concepts. The developed framework constitutes a formal basis for an automated EIS. To incorporate the authority relations introduced in this article into this framework, both syntactic and semantic integration should be performed. The syntactic integration is straightforward, as the authority relations are expressed using the same formal basis (sorted predicate logic) as the framework. Furthermore, the authority relations are specified on the concepts defined in the framework (e.g., tasks, processes, resources, performance indicators). For the semantic integration, rules (or axioms) that attach meaning to the authority relations and define integrity and other types of organizational constraints on them should be specified. A language for these rules is required to be (1) based on sorted predicate logic; (2) expressive enough to represent all aspects of the authority relations; and (3) executable, to make the constraints (axioms) operational. Furthermore, as authority relations are closely related to dynamic flows of control that describe a temporal ordering of processes, a temporal allocation of resources, and so forth, the language should be temporally expressive. A language that satisfies all these requirements is the temporal trace language (TTL). In Sharpanskykh and Treur (2006), it is shown that any TTL formula can be automatically translated into an executable format that can be implemented in most commonly used programming languages. In the following, the semantic integration rules and several examples of constraints defined for particular organizations are considered.
The first axiom on the authority relations expresses that roles that are responsible for a certain aspect related to some process should necessarily be authorized for it:

Ax1: ∀r: ROLE ∀a: PROCESS ∀aspect: ASPECT ∀γ: TRACE ∀t: TIME state(γ, t) |= [ is_responsible_for(r, aspect, a) ⇒ is_authorized_for(r, aspect, a) ].

Another axiom expresses the transitivity of the relation is_subordinate_of_for: r1: ROLE x r2: ROLE x a: PROCESS:

Ax2: ∀r1, r2, r3: ROLE ∀a: PROCESS ∀γ, t state(γ, t) |= [ [ is_subordinate_of_for(r2, r1, a) ∧ is_subordinate_of_for(r3, r2, a) ] ⇒ is_subordinate_of_for(r3, r1, a) ].

One more axiom (Ax3), which relates the interaction (communication) structure of an organization to its authority structure based on superior-subordinate relations, expresses that a communication path should be specified between each superior role and his/her subordinate(s). Such a path may include intermediate roles from the authority hierarchy and may consist of both interaction and inter-level links. The following axiom expresses that only roles that have the responsibility to make managerial decisions with respect to some process are allowed to authorize other roles for some aspect of this process:

Ax4: ∀r1, r2: ROLE ∀a: PROCESS ∀asp: ASPECT ∀γ, t state(γ, t) |= [ authorizes_for(r1, r2, asp, a) ⇒ is_responsible_for(r1, manage_des, a) ].
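A minimal sketch of how Ax1 and Ax2 might be verified on a static specification is given below in Python. The set-based encoding and the sample facts are our assumptions; the article only states that simple verification algorithms were implemented.

# Sample specification: (role, aspect, process) and (subordinate, superior, process).
authorized = {("r1", "execution", "p"), ("r1", "tech_des", "p")}
responsible = {("r1", "execution", "p")}
subordinate = {("r3", "r2", "p"), ("r2", "r1", "p"), ("r3", "r1", "p")}

def check_ax1() -> bool:
    # Every responsibility must be backed by an authorization.
    return responsible <= authorized

def check_ax2() -> bool:
    # For every chained pair (r3 under r2, r2 under r1 for the same process),
    # the transitive link (r3 under r1) must already be in the specification.
    return all(
        (a, d, p1) in subordinate
        for (a, b, p1) in subordinate
        for (c, d, p2) in subordinate
        if b == c and p1 == p2
    )

print(check_ax1(), check_ax2())  # True True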
In general, rules that describe the processes of authorization and of assigning/retracting responsibilities may have many specific conditions. However, to assign responsibility for some aspect of a process, a role should necessarily have at least the responsibility to make managerial decisions and be the superior (with respect to this process) of the role to which the responsibility is assigned. All other conditions may be optionally specified by the designer. Responsibility may be assigned on a temporal basis. To specify that a responsibility relation holds in all states that correspond to time points within the time interval limit, a responsibility persistency rule should be defined:

C1: ∀asp: ASPECT ∀r1, r2: ROLE ∀a: PROCESS ∀γ ∀t1, t2: TIME state(γ, t1) |= is_responsible_for(r2, asp, a) & state(γ, t2) |= assigns_responsibility_to_for(r1, r2, asp, a) & (t1 - t2) < limit ⇒ state(γ, t1+1) |= is_responsible_for(r2, asp, a).

Using concepts and relations from other organizational views, more complex constraints related to formal authority can be described. For example, "the total amount of working hours for role r1 should be less than a certain limit":

C2: sum([a: PROCESS], case(∃t1 state(γ, t1) |= is_responsible_for(r1, execution, a), a.max_duration, 0)) < limit.

This property can be automatically verified every time roles are assigned additional responsibilities for some processes. This is particularly useful in matrix organizations (Scott, 2001), in which roles often combine functions related to different organizational formations (departments, teams), and, as a result, their actual workload may not be directly visible. Another constraint expresses that when the execution of a process begins, a responsible role should be assigned for each of the basic aspects of this process (execution, tech_des, and manage_des):

C3: ∀a: PROCESS ∀γ, t state(γ, t) |= process_started(a) ⇒ ∃r1, r2, r3: ROLE state(γ, t) |= [ is_responsible_for(r1, manage_des, a) ∧ is_responsible_for(r2, tech_des, a) ∧ is_responsible_for(r3, execution, a) ].

Another example is related to the rewards/sanctions imposed on a role depending on the results of process execution. As shown in Popova and Sharpanskykh (2007b), performance indicators (PIs) that represent performance measures of some aspects of task execution may be associated with organizational processes. Depending on the PI values, a company may have regulations to provide/impose rewards/sanctions for the roles (agents) responsible for the corresponding processes. Although such rules are rarely completely automated, an EIS may still signal to managers situations in which some rewards/sanctions can be applied. For example, the system may detect and propose a reward-granting action to the manager when a role has been keeping the values of some PI(s) related to its process above a certain threshold for some time period [period_start, period_end]. In TTL:

C4: ∀γ, t1 t1 ≥ period_start & t1 ≤ period_end & state(γ, t1) |= [ is_responsible_for(r2, execution, a1) ∧ measures(PI1, a1) ∧ is_subordinate_of_for(r2, r1, a1) ∧ PI1.value > limit ] ⇒ state(γ, period_end+1) |= grants_reward_to_for(r1, bonus_5_percent, r2, excellent_performance_of_a1).

The axioms Ax1-Ax4 can be checked on a specification of organizational formal authority relations; to this end, simple verification algorithms have been implemented. Constraints such as C1-C4, in contrast, need to be checked on actual executions of organizational scenarios (e.g., traces obtained from an EIS). An automated method that enables such types of analysis is described in Popova and Sharpanskykh (2007a).
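The following illustrative Python check (the data layout and role names are assumed; C3 itself is the constraint defined above) shows how an EIS could test, at process start, whether all three basic aspects have a responsible role:

responsible = {  # (role, aspect, process)
    ("manager_1", "manage_des", "p1"),
    ("engineer_1", "tech_des", "p1"),
    ("employee_A", "execution", "p1"),
}

def check_c3(process: str) -> list:
    """Return the basic aspects that still lack a responsible role."""
    required = {"execution", "tech_des", "manage_des"}
    covered = {aspect for (_, aspect, a) in responsible if a == process}
    return sorted(required - covered)

missing = check_c3("p1")
if missing:
    print(f"C3 violated for p1, unassigned aspects: {missing}")
else:
    print("C3 satisfied: p1 may start")  # this branch is taken for the data above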
Furthermore, the identified rules can be used to determine, for each user of an EIS, the information relevant to him/her and a set of allowed actions that are in line with his/her (current) responsibilities defined in the system. Moreover, the (possible) outcomes of each action of the user can be evaluated against a set of (interdependent) authority-related and other organizational constraints, and based on this evaluation the action is either allowed or prohibited.
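As a speculative example of such rule-driven authorization (the action-to-aspect mapping below is our assumption and is not defined in the article), an EIS could filter user actions as follows:

responsible = {
    ("employee_A", "execution", "p1"),
    ("manager_1", "manage_des", "p1"),
}

def is_action_allowed(user: str, action: str, process: str) -> bool:
    # Map EIS-level actions onto the authority aspect they presuppose.
    required_aspect = {
        "start_process": "execution",
        "approve_schedule": "manage_des",
        "change_specification": "tech_des",
    }.get(action)
    return required_aspect is not None and (user, required_aspect, process) in responsible

print(is_action_allowed("employee_A", "start_process", "p1"))     # True
print(is_action_allowed("employee_A", "approve_schedule", "p1"))  # False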
DISCUSSION

This article makes a first step towards defining formal operational semantics for power-related concepts (such as authority, influence and control), which are usually only vaguely described in organization theory. In particular, this article addresses formal authority, different aspects of which are made operational by defining a dedicated predicate-logic-based language. It is illustrated how the introduced relations can be used for representing authority structures of organizations of different types. Modern enterprises can be described along different dimensions/views, that is, human-oriented, process-oriented and technology-oriented. However, most existing EISs focus particularly on the process-oriented view. Extending the models on which EISs are built with concepts and relations defined within the human-oriented view allows conceptualizing more static and dynamic aspects of organizational reality, thus resulting in more feasible enterprise models. Among the relations between human actors, authority deserves special attention, as it is formally regulated and may exert a (significant) influence on the execution of enterprise processes. This article illustrates how the concepts and relations of authority can be formally related to other organizational views, thus resulting in an expressive and versatile enterprise model. The introduced authority relations may also be incorporated into other existing
enterprise architectures that comply with the requirements of GERAM (e.g., CIMOSA), on which modern EISs are built. However, to enable the semantic integration of the authority concepts, an EIS is required to have formal foundations, which are missing in many existing enterprise architectures and systems. In the future, it will be investigated how the proposed authority modeling framework can be applied to the development of automated support for the separation task (i.e., maintaining a safe distance between aircraft in flight) in the area of air traffic control. Originally this task was managed by land controllers, who provided separation instructions to pilots. With the increase of air traffic, the workload of controllers also rose. To facilitate the controllers' work, it was proposed to (partially) delegate the separation task to pilots. This proposal found supporters and opponents both among controllers and among pilots. The resistance was (and is) to a large extent caused by the ambiguity and vagueness of issues related to power mechanisms. Such questions as "whom to blame when an incident/accident occurs?", "which part of the task may be delegated?" and "under which environmental conditions can the task be delegated?" still remain open. By applying the framework proposed in this article, one can precisely define the responsibilities of both controllers and pilots and the conditions under which responsibility can be assigned/retracted. Notice that these conditions may include relations from different views on organizations (e.g., "current workload is less than x," "has ability a"), which allows great expressive power in defining constraints.
REFERENCES

Bacharach, S., & Aiken, M. (1977). Communication in administrative bureaucracies. Academy of Management Journal, 18, 365-377.
Bernus, P., Nemes, L., & Schmidt, G. (2003). Handbook on enterprise architecture. Berlin: Springer-Verlag.

Blau, P., & Scott, W. (1962). Formal organizations. Chandler Publishing.

CIMOSA. (1993). CIMOSA: Open system architecture for CIM. ESPRIT consortium AMICE. Berlin: Springer-Verlag.

Clegg, S. (1989). Frameworks of power. London: Sage.

Friedrich, C. (Ed.). (1958). Authority. Cambridge, MA: Harvard University Press.

Gulick, L., & Urwick, L. (Eds.). (1937). Papers on the science of administration. New York, NY: Institute of Public Administration.

Hickson, D., Hinings, C., Lee, C., Schneck, R., & Pennings, J. (1971). A strategic contingency theory of intra-organizational power. Administrative Science Quarterly, 16, 216-229.

Jonker, C., & Treur, J. (2003). A temporal-interactivist perspective on the dynamics of mental states. Cognitive Systems Research Journal, 4, 137-155.

Lansley, P., Sadler, P., & Webb, T. (1975). Organization structure, management style and company performance. London: Omega.

Manzano, M. (1996). Extensions of first order logic. Cambridge, UK: Cambridge University Press.

Mintzberg, H. (1979). The structuring of organizations. Englewood Cliffs, NJ: Prentice Hall.

Parsons, T. (1947). The institutionalization of authority. In M. Weber, The theory of social and economic organization. New York, NY: Oxford University Press.
Peabody, R. (1964). Organizational authority: Superior-subordinate relationships in three public service organizations. New York, NY: Atherton Press.

Popova, V., & Sharpanskykh, A. (2007a). Process-oriented organization modeling and analysis. In J. Augusto, J. Barjis, & U. Ultes-Nitsche (Eds.), Proceedings of the 5th International Workshop on Modelling, Simulation, Verification and Validation of Enterprise Information Systems (MSVVEIS 2007) (pp. 114-126). INSTICC Press.

Popova, V., & Sharpanskykh, A. (2007b). Modelling organizational performance indicators. In F. Barros, et al. (Eds.), Proceedings of the International Modeling and Simulation Multiconference IMSM'07 (pp. 165-170). SCS Press.

Popova, V., & Sharpanskykh, A. (2007c). A formal framework for modeling and analysis of organizations. In J. Ralyte, S. Brinkkemper, & B. Henderson-Sellers (Eds.), Proceedings of the Situational Method Engineering Conference, ME'07 (pp. 343-359). Berlin: Springer-Verlag.

Scheer, A.-W., & Nuettgens, M. (2000). ARIS architecture and reference models for business process management. In W. van der Aalst, et al. (Eds.), LNCS 1806 (pp. 366-389). Berlin: Springer.

Scott, W. (2001). Institutions and organizations. Thousand Oaks, CA: Sage Publications.

Sharpanskykh, A., & Treur, J. (2006). Verifying inter-level relations within multi-agent systems. In Proceedings of the 17th European Conference on AI, ECAI'06 (pp. 290-294). IOS Press.

Simon, H. (1957). Administrative behavior (2nd ed.). New York, NY: Macmillan Co.

Weber, M. (1958). From Max Weber: Essays in sociology (H. Gerth & C. Mills, Eds.). New York, NY: Oxford University Press.
This work was previously published in Always-On Enterprise Information Systems for Business Continuance: Technologies for Reliable and Scalable Operations, edited by Nijaz Bajgoric, pp. 252-264, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 5.3
Enterprise Systems, Control and Drift

Ioannis Ignatiadis
University of Bath, UK

Joe Nandhakumar
University of Warwick, UK

DOI: 10.4018/978-1-60566-146-9.ch016
ABSTRACT

Enterprise Systems are widespread in current organizations and seen as integrating organizational procedures across functional divisions. An Enterprise System, once installed, seems to enable or constrain certain actions by users, which have an impact on organizational operations. Those actions may result in increased organizational control, or may lead to organizational drift. The processes that give rise to such outcomes are investigated in this chapter, which is based on a field study of five companies. By drawing on the theoretical concepts of human and machine agencies, as well as the embedding and disembedding of information in the system, this chapter argues
that control and drift arising from the use of an Enterprise System are outcomes of the processes of embedding and disembedding human actions, which are afforded (enabled or constrained) by the Enterprise System.
INTRODUCTION

Implementation of an Enterprise System (also known as an Enterprise Resource Planning (ERP) system) in an organization may have a profound impact on organizational processes (Boudreau & Robey, 1999; Koch, 2001; Martin & Cheung, 2000; Schniederjans & Kim, 2003; Siriginidi, 2000), as well as on information flow and transparency (Bernroider & Koch, 1999; Besson & Rowe, 2001; Gattiker & Goodhue, 2004; Legare,
2002; Markus & Tanis, 2000; Newell et al., 2003; Shang & Seddon, 2000). Much of the research on Enterprise Systems, however, is concerned with the implementation process and with providing insights into the success factors of Enterprise Systems implementation (e.g., Akkermans & van Helden, 2002; Al-Mashari & Al-Mudimigh, 2003; Bingi et al., 1999; Holland & Light, 1999; Hong & Kim, 2002; Nah et al., 2001; Shanks et al., 2000; Somers & Nelson, 2001). Only a few studies investigate issues relating to the post-implementation of ES (e.g., Elmes et al., 2005). Hence we have a limited understanding of the issues affecting the use of Enterprise Systems in organizations and their potential for organizational impact. This chapter therefore concentrates on the actual use of an Enterprise System, post-implementation. It examines the impact of actions performed by humans (users), or by a machine (the Enterprise System), on control and drift within an organization. We propose a theoretical conceptualisation to describe the impact of those actions by drawing on a field study of five companies that have an Enterprise Resource Planning system installed. The significance of this research is twofold. First, the conceptualisation developed in this chapter enhances the understanding of the processes that result in organizational control (or drift) through the use of an Enterprise System. Second, our results also pinpoint issues of practical interest to companies that are using (or thinking of installing) an Enterprise System. Although ERP systems were originally designed to be used within an organization, in recent years they have evolved considerably to include or link with external functionalities such as Customer Relationship Management (CRM), Supply Chain Management (SCM) and e-business (B2B and B2C). The current trend is also to repackage ERP systems as a collection of interoperable modules with standards-based interfaces, in accordance with the mandates of Service-Oriented Architectures. The examination of ERP systems in this chapter, however, only
looked at internal operations, and the use of such systems referred only to internal actors, without examining external linkages, which was beyond the purposes of this research. The rest of the chapter is structured as follows: in the following section, we review the relevant literature on Information Systems, control and drift, as well as human agency, which are topics central to our research. We then present our theoretical foundations, in which we frame our analysis and discussion. Our research approach is then outlined, followed by a description of the companies that participated in this research. We follow this with an analysis of the data gathered from the companies, across the dimensions of control and drift. We then discuss our findings and present our conceptualisation of Enterprise System use, and conclude with some theoretical and practical implications of our research.
LITERATURE REVIEW

Enterprise Systems, Control and Drift

The link of Information Systems with organizational control has been investigated by a variety of scholars in the field (e.g., Coombs et al., 1992; Duane & Finnegan, 2003; Malone, 1997; Tang et al., 2000). Many point to the paradox that while Information Systems can empower employees with increased decision-making capabilities, at the same time they can serve to increase control within the organization (e.g., Bloomfield & Coombs, 1992; Bloomfield et al., 1994; Orlikowski, 1991). Although control in a general Information Systems setting has been examined to a large extent, the number of studies in an Enterprise Systems setting in particular is still quite limited. What distinguishes Enterprise Systems from other Information Systems is their scale, complexity, and potential for organizational impact. Because of this, they deserve special attention with regards
to the issue of control. Of the limited number of studies in this area, the characteristic ones are those by Hanseth et al. (2001), Sia et al. (2002), and Elmes et al. (2005). The main findings of these three studies are outlined below. Hanseth et al. (2001) claim that Enterprise Systems (such as ERP systems), with their emphasis on standardizing, streamlining, and integrating business processes, are an ideal control technology. However, they point to a surprising result: that implementing an ERP system across a global organization in order to enhance control may well have the opposite effect, i.e., reduce control. This can be explained by the ubiquitous nature of side effects. In that sense, the more integrated the system is, the faster and farther side effects have an impact, and the bigger their consequences. Sia et al. (2002) have examined the issues of empowerment and panoptic control in ERP systems. They summarize the panoptic control aspect of ERPs in three dimensions: comprehensive system tracking capability, enhanced visibility to management, and enhanced visibility to peers (through workflow dependency and data interdependency). The findings by Sia et al. (2002) indicate that although an ERP implementation has the potential for both employee empowerment and managerial control, managerial power seems to be perpetuated through an ERP implementation. Elmes et al. (2005) have identified two seemingly contradictory theoretical concepts in an Enterprise System: reflective conformity and panoptic empowerment. Reflective conformity refers to the way that the integrated nature of the Enterprise System leads to greater employee discipline, while at the same time requiring employees to be reflective in order to achieve organizational benefits from the system. Panoptic empowerment describes the greater visibility of information provided by the shared database of the Enterprise System. This empowers employees to do their work more effectively and efficiently, but at the same time makes their work in the system
more visible to others, who can then more easily exercise control over them. Regarding the issue of drift, Ciborra (2002) defines drift as the processes of matching between situated human interventions of use and open technology. Technology drifting can be the result of passive resistance, learning-by-doing, sabotage, radical shifts in conditions, or plain serendipity. In addition, Ciborra (2000) mentions the case where control has to decrease when it is associated with the power to bring to life sophisticated and evolving infrastructures. Although Ciborra concentrates mainly on technology drift, this chapter is concerned with drift at the organizational level, which is implied by a decrease in organizational control. Regarding drift in particular, van Fenema and van Baalen (2005) have looked into strategies for dealing with drift during the implementation of ERP systems. They distinguish between three such strategies, of which they argue the third (drift containment) is the most realistic in ERP implementation projects:

• Control strategy aims at eliminating drifting and risk.
• Incremental strategy considers drifting to be a normal part of technology implementations. In this case "bricolage" is used to adapt technology to its context.
• Drift containment recognizes the inevitable drifting in technology implementations and the fact that drifting may even contribute to the stabilisation of technology. The question is then how to balance control and drift and use drift as a source of stabilisation of technology projects.
Although the above studies looked (amongst other issues) at either control or drift (but not both at the same time), Nandhakumar et al. (2005) have looked into the contextual forces of an ERP implementation, and how those influence control and drift during the implementation process. These contextual forces were interrelated,
and referred to the affordance of the technology, as well as the social structure, practices and norms, either within the organization or external to it. The analysis by Nandhakumar et al. (2005) was based on examining managers’ intentions, the power and cultural context within the organization, as well as the affordances of the technology. Control was seen by Nandhakumar et al. as an outcome of managerial intentions regarding the trajectory of implementation of the ERP system, depending on both system affordances and social structure. (Technological) drift was then seen by Nandhakumar et al. (2005) to occur from the “organizational members’ planned and unplanned actions in response to both previous technology and organizational properties they have enacted in the past” (p. 239). Unintended consequences of the implementation would then mean that the technology would drift from the planned implementation outcomes. Control and drift during the implementation of the ERP system in this case were interrelated, and would operate in continuous cycles, in response to contextual forces shaping the actual implementation of the ERP system. Although Nandhakumar et al. directly acknowledge the influence of users in accepting or rejecting the ERP system, their study mainly examined managerial as opposed to user intentions, and the implementation as opposed to the use stage of an ERP system. This chapter therefore complements the viewpoint of Nandhakumar et al. by arguing that the way users use the system, according to the affordances of the technology, also has a large part to play in shaping control and drift within an organization.
Enterprise System Use and Human Agency

One important strand of Enterprise System research which is also relevant to the current chapter is the linkage of Enterprise System (e.g. ERP) use with the agency of humans. Kallinikos
(2004) for example argues that ERP systems help shape human agency and institute patterns of action and communication in organizations. They accomplish this by delineating the paths along which human agency should take place. This is afforded by the dissection of organizational activities into discrete terms, and the provision of procedural sequences for the execution of particular tasks. In this sense, ERP systems are mostly concerned with the streamlining, control and standardisation of organizational operations. In so doing, ERP systems enable the construction of accountable and governable patterns of behaviour in organizations. Similarly, Boudreau & Robey (2005) have pointed out that when looking at organizational change arising from the use of IT, an agency perspective may mean limited possibilities for radical IT-induced change. An agency perspective of IT in this case takes the position that IT is socially constructed and open to a variety of social meanings and potential uses. Boudreau & Robey argue that some technologies allow for a greater degree of human agency than others. Their views agree with those of Orlikowski (2000), who acknowledges that while users can and do use technologies as they were designed, they also can and do circumvent the intended uses of technologies, either by ignoring certain properties, working around them, or inventing new ones. The research by Boudreau & Robey looked at ERP systems, which are seen as inflexible software packages constraining user-inspired action (human agency). Their results, however, indicate that although ERP systems are seen as rigid control mechanisms, there is still scope for human agency to take place within such systems, contradicting Kallinikos (2004). The research by Boudreau & Robey indicated that technical system constraints on human agency can be overcome through a process of initial inertia (rejection of the system), improvised learning as a result of pressure to use the system, and finally reinvention of usages of the system to match users’ own previous experiences and backgrounds.
THEORETICAL FOUNDATIONS

This section presents the theoretical foundations for the development of our conceptualisation in the chapter. These theoretical foundations are the concepts of embedding-disembedding and of agency (human and machine). These concepts were chosen as they were deemed important in explaining the processes that lead to organizational control and drift. The theoretical foundations presented below, together with the concepts of control and drift described in the previous section, will inform our analysis and discussion.
Embedding-Disembedding

Giddens (1990) defines disembedding as “the lifting out of social relations from local contexts of interaction and their restructuring across indefinite spans of time-space” (p. 21). Conversely, embedding (or reembedding) is, according to Giddens, “the reappropriation or recasting of disembedded social relations so as to pin them down (however partially or transitorily) to local conditions of time and place” (pp. 79-80). For Giddens (1990) there are two types of disembedding mechanisms: symbolic tokens and expert systems (not expert systems in an information systems sense). Although Giddens concentrates mainly on money, symbolic tokens in general are media of exchange that can be circulated without regard to the specific characteristics of the people or groups that handle them. Expert systems are then organizations of technical accomplishment or professional expertise that make a significant contribution to the material and social environment in which we live. In the current context of an Enterprise System, it can be argued that the expert system as defined by Giddens consists of the rules and procedures
that are inscribed in the Enterprise System. The symbolic token is then the information that is held and represented within the Enterprise System. The social relations that are being disembedded and reembedded derive from the actions of users in the system. These actions are disembedded from the local context where they are carried out, to time-space stretches of the company’s operations. Those actions are consequently reembedded, by impacting on, and being appropriated by, other users of the system.
Human and Machine Agency

Giddens (1984) defines agency as the “capability to make a difference, that is to exercise some sort of power” (p. 139). In other words, agency is synonymous with the carrying out (or intentionally not carrying out) of an action. With regards to actions in an Information Systems setting, Rose et al. (2003) have questioned the relationship between the social and technical aspects of IS; in other words, how do social systems act upon technology, and vice versa? Rose & Jones (2004) have drawn on both Actor-Network Theory and Structuration Theory, as well as Pickering (1995), to develop a model called the “Double Dance of Agency”. In this model, a distinction is made between human agency and machine agency, but the two are interwoven and affect each other. Rose & Truex (2000) have proposed understanding machine agency as perceived autonomy. Machine agency is then an emergent property of the development process and becomes embedded in the completed machine. However, Nandhakumar et al. (2005), in their study of an ERP implementation, draw from Gibson (1979) and Norman (1988) to view machine agency as affordance. They also draw from Giddens (1984), who attributes intentionality to human agency. In the context of this chapter, we will similarly assume that machine agency is characterised by affordance, whereas human agency is characterised by intentionality. The next section presents the research approach employed in our research.
RESEARCH APPROACH

Our research approach is the interpretive case study (Walsham, 1993). In interpretivism, reality is socially constructed by human agents (Walsham, 1995). Interpretive studies reject the notion of an objective or factual account of events and situations, seeking instead a relativistic, albeit shared, understanding of phenomena (Orlikowski & Baroudi, 1991). In the current research, five companies (NGlobe, TechniCom, SecSys, FCom and TransCom – all pseudonyms) have been examined with regards to the use of their Enterprise System. The research took place between October 2004 and August 2005, and involved formal semi-structured interviews (mostly face-to-face, with a few by telephone) with company staff (office employees and managers). It also involved informal conversations with them, as well as observation of user interactions with the system. Interviews were mostly with one person, although a couple of them involved two persons, and lasted between 40 minutes and 2 hours, with an average of 1 hour per interview. All of the interviews (with the exception of one, due to personal sensitivities) were recorded and transcribed verbatim. Table 1 below summarizes the interviews with members of the 5 companies researched.

Analysis was carried out using the qualitative data analysis approach proposed by Miles & Huberman (1994). They distinguish between three types of codes: descriptive codes entail little interpretation, but attribute a class of phenomena to a segment of text (such as an interview transcript). The same segment could also be coded with an interpretive code, where the researcher uses some of his or her knowledge about the background of the phenomenon in order to interpret the segment of text. Thirdly, pattern codes are even more inferential and explanatory. In the current research, coding was done with the aid of the qualitative analysis software NVivo. Within NVivo, descriptive coding was done through a first pass analysis of the interview transcripts, where the relevant text was examined, and portions of it (sentences, paragraphs, or sections) were assigned a code according to the phenomenon that they were describing. Some pattern coding also occurred at this stage, as codes were grouped into categories at various levels that linked those codes together. This coding was more akin to a grounded theory approach (Glaser & Strauss, 1967), where codes evolved directly from the data. Memos were also kept in NVivo about issues of interest that emerged, and reflections on the part of the researcher on the data examined. This prompted a second pass of coding, where the coding was more theory-driven, according to themes identified in the first pass, and literature that could support the concepts from the first pass of coding (e.g. relevant literature on control, drift, human action, machine action, intentionality, affordance, embedding, disembedding). The second pass of coding was more interpretive, in that segments of text were interpreted according to the chosen literature. However, data were not forced into predefined categories, as codes emerged not only from the literature, but also as a result of the first pass coding, which was grounded in the interview data. In addition, pattern coding also occurred at this stage, as several codes were grouped into higher-order patterns. The analysis below is structured in terms of the two concepts of control and drift, also taking into account human and machine actions, and the respective intentionalities and affordances. Indicative quotes are also presented. Discussion of the results then incorporates the concepts of embedding and disembedding. The next section briefly describes the background of the companies participating in the research, followed by the analysis and discussion of our results.
Table 1. Illustration of the interviews carried out

Area | Positions Interviewed | No. of Interviews

Company: NGlobe
IS Department | Business Centre IS Manager; General IS Manager | 2
Finance Department | Accounts Payable Employee; Cash Management and Treasury Process Director; Record to Report Process Director | 3

Company: TechniCom
Procurement | Supply Chain Manager 1; Supply Chain Manager 2 | 2

Company: SecSys
Procurement | Purchasing Manager | 2

Company: FCom
IT Department | IT Manager | 2

Company: TransCom
Sales | Sales Facilitator; Commercial Assistant | 3
Finance | Billing Clerk; Accounting Reports Manager; Accounts Payable Clerk; Assistant Accountant; Assistant Finance Manager | 6
Service Management | Maintenance Policy Leader; Head of Production; SAP Facilitator; Production Planner; Shift Planning Coordinator; Flow Repairable Controller; Reliability Group Leader; Abnormal Work Manager | 12
Warehouse and Distribution | Business Improvement Coordinator; Logistics Director; Inventory Planner | 5
Materials Management | Materials Controller 1; Materials Controller 2; Materials Planner | 5
Purchasing | Purchasing Manager | 1
IT Management | IT Manager; Global Information Systems Director | 4

Total Interviews | | 47

COMPANY DESCRIPTIONS

The companies that participated in the research are described below (with pseudonyms). All of the companies have had an Enterprise System (ERP) installed for a minimum of 2 years. This
was a criterion when selecting those companies, in order to be able to observe the impacts of the Enterprise System after it had been used for a number of years, and not during or immediately after implementation. The companies come from different sectors, have different customer markets,
and use different ERP systems, which adds to the generalizability of the results.
TransCom

TransCom operates in the transport sector, and has 5 billion Euros in annual sales worldwide. TransCom is using the SAP R/3 ERP system, currently on version 4.5 in most places where it is installed, although there are some later versions (4.6) or earlier versions (e.g. 3.1, or even R/2) in some countries. The system was fully installed in January 2002 in the UK. In addition to the UK, SAP is currently fully installed in Spain, France, Sweden, Romania, Chile, and the USA. The modules of SAP used are Materials Management, Service Management, Finance, and Sales & Distribution.
NGlobe

NGlobe has a presence in almost 200 countries, and generates most of its revenue by selling financial information. NGlobe has the Oracle 11i ERP system installed, used for the modules of Finance (accounts receivable, payable, general ledger, fixed assets, and purchasing) and HR. In addition, Oracle’s learning module is also installed. The design of the ERP system took place between 2000 and 2001, and NGlobe first went live in 2001, with the whole implementation finishing by the end of 2003. NGlobe was one of the first implementations of a single global instance of Oracle. This was enabled by the fact that NGlobe is a very homogeneous company.
TechniCom

TechniCom is a company in the technological sector. It has recently been restructured, and over a 4-year period TechniCom has bought 32 companies, all with different bespoke systems. TechniCom is using Oracle version 11 for the financials part, mainly for accounts payable and accounts receivable. For their HR function TechniCom is using the PeopleSoft ERP system. For billing they have had a bespoke system built for them, which harmonizes all the different billing systems from all the companies that TechniCom bought.
FCom

FCom is a vertical retail manufacturer, which means that it manufactures the bulk of the products that it sells in its stores. It operates more like a family-run business than a large corporate public company. FCom is a £55 million turnover company operating in the UK, currently with 61 stores and around 500 employees. FCom has the SAP R/3 system installed. The modules used are Payroll, Finance, Manufacturing, and Sales and Distribution.
SecSys

SecSys is owned by an American company, and operates in the UK in the service support sector. There are around 25 SecSys offices in the UK. SecSys has the SAP R/3 (version 3.1h) ERP system, which was installed in November 1998 in order to overcome the millennium issue. The modules of SAP currently used are Finance, Logistics, Materials Management, Inventory Management, Purchasing, and Sales and Distribution. The company intends to upgrade its ERP system within the next 2 years.
CASE EVIDENCE AND ANALYSIS

The field study data from the companies were analyzed based on the categories of control and drift, which are central to our research. The analysis also drew on the notions of the intentionality of human agency (users), and the affordances of the machine agency of the Enterprise System. Human agency in the analysis below is implied by words such as "intention" and "intentionality".
Machine agency is implied by words such as "afford", "enable", "constrain". Only data from the companies that are relevant to the main theme of this research are presented here. Before examining the actual use of the Enterprise System, our research data indicated that there are also factors which influence its use, and which are discussed next. The examination of these factors comes mostly from TransCom, due to the larger number of interviews carried out in that company.
Factors Impacting Enterprise System Use

Cultural and Organizational Context

In TransCom there was a global group responsible for the implementation and maintenance of the ERP system worldwide, named ERPGlobal. They interfaced with the UK via a local country group named ITUK. Within the sites examined in this research, there were some negative views about the ITUK team, as well as about ERPGlobal. As ERPGlobal wanted to keep the configuration and use of the ERP system as standard worldwide, it was very hesitant in carrying out updates to it, unless those would affect the majority of countries where the system was installed. Even in cases where updates were agreed by ERPGlobal, however, those took a very long time to implement, according to the users, and this was viewed negatively. However, as most users in the UK interfaced with the ITUK team directly, and not with ERPGlobal, most of the negative criticisms were directed towards the ITUK team. This tended to put the ITUK team in an awkward position, as they needed sanctioning from ERPGlobal to carry out user requests, but ERPGlobal was reluctant to give its consent in many matters in order to avoid deviation from standards regarding the configuration and use of SAP. As a result, many users who did not have interaction with ERPGlobal blamed
the ITUK team for being unresponsive to their needs, slow and inefficient. Sometimes there were tensions between users and the ITUK team, as well as between the ITUK team and ERPGlobal. In addition, the ITUK team claimed that they were understaffed, with the result that they could not respond very quickly to user requests. This impacted training as well, and one common complaint from most users of the system was that they had not received enough training, and that either they did not understand how the system worked, or could not use it to its full potential. As a result, the ITUK team, which was responsible for training, was seen as quite irresponsible and unresponsive with regards to the training needs of the users. There were also cultural reasons which could influence the use of the system. For example, the implementation of SAP gave TransCom the opportunity to record and control time, and how long it actually took to do a job on a train. Nevertheless, the employees at the Manchester depot refused to do this when SAP was initially implemented at their site. On closer examination, however, it was revealed that they would have refused to do this anyway, whether the SAP system was there or not. The intention of not recording time was a strong cultural aspect, but it was also local to the specific area. This refusal to record working times at the Manchester depot was in fact mentioned to be tied to contracts of employment and the influence of trade unions.
Resistance

In addition, when the SAP system was installed in TransCom, there was resistance to the system at various levels. Some middle managers and users did not use the system fully. In order to overcome this, initial training regarding the system was carried out, in order for the employees to understand what the system could offer. However, there was still resistance from middle managers to fully using SAP as a management tool. SAP was seen
by many managers in TransCom as a financial overhead, and consequently there was not a lot of enthusiasm in supporting it. This also had an impact on the users of the system. Resistance was evident there too, because users were used to the old systems installed in TransCom, some of which were still operational. In that sense, there was resistance to SAP, also fuelled by the fact that many of the users were considered technophobes, and were not used to computing systems. The main complaints were that the system was difficult to use, was not user friendly, and that users did not understand what all the fields and text in SAP were used for. The general lack of proper training also contributed to this lack of knowledge and apprehension of the system. However, this type of resistance was considered normal, and was expected to subside once the users felt more confident with the system: That’s just the normal resistance to a new system, because a lot of the people had other systems, which they knew and they were confident, and they knew what they could get out of them. And so it’s just this transition to another system that they’ve got to learn and understand. But I’m sure once they can understand it, once they can use it, they’ll be quite happy with it. (Reliability Group Leader, TransCom)
Implementation History

The implementation of SAP was also the impetus for creating a new business unit within TransCom, whose aim was to stock and supply the depots in the UK (and especially the West Coast area) with spare parts for trains. This was important, as TransCom did not have a proper parts business to deal with spare parts for trains, and therefore it also did not have the systems to manage the spare parts.
As a result, the business processes were written around the system, in order to make the system work. The system was conceived and implemented by a consultancy company, who looked at what the business strategy was, and the processes that TransCom required. They then tailored SAP to suit the business needs of the company at the time SAP was implemented, in terms of, for example, order processes, purchasing, inventory, warehousing, and logistics. As a parts business did not exist before, people within TransCom did not know in detail how the business should be supported by the ERP system. The consultancy company therefore implemented their own ideas to a large degree, resulting in a system that was, in the view of many interviewees, very inadequate. The consultancy company was thought to be responsible for this, having implemented the system: They [the consultancy company] thought they knew how to implement a spare parts management system in SAP, we did not have the expertise here in-house, which meant that they were always able to convince the people they were dealing with what the right answers were. And it was a very poor implementation. (Logistics Director, TransCom)
Power Differentials

During the implementation of SAP some employees of TransCom were released in order to participate in the implementation and contribute their business experience towards the development of the system. Many of these people did not have any proper training at all in SAP, but got to know it from working together with the consultants. Although those TransCom employees were released back to the business when the implementation of SAP finished, the experience of the system they gained during its implementation meant that they came to be perceived as SAP experts by other users of the system:
And whilst the consultants were here for 3 months helping other people load their data, I sat with the consultants, learning the system, saying, right, if I do this, how does that happen, how do I do this, and things like that. And just general questions, saying, right, there’s more information to get. One thing led to another, and I became one of the, sort of, SAP experts within the business. (Business Improvement Coordinator, TransCom) As a result of gaining SAP technical knowledge, the perceived SAP experts at TransCom became the first point of contact when users had problems with the system, instead of users logging their problems with the ITUK team and the ITUK team then trying to identify what the solution would be. The help of the perceived SAP experts was sought as they could go to the users’ desks quite quickly to see what the problem was, and explain to them what they were doing wrong. However, the business rules indicated that in case of problems users should contact the ITUK team in the first instance. By bypassing those rules and seeking the help of the perceived SAP experts instead, the power of the latter was increased in the company, as they could influence the way that the ERP system was used by other users. This meant that the perceived SAP experts could direct the agency of other users in terms of the way they were using the system. If the directions given to the users were right, then the intended control from using the system would be re-enacted; if the directions were flawed, then drift could be propagated by the end users using the system incorrectly. Having described some factors that could influence the use of the ERP system, the next section discusses the various ways that organizational control from the use of the Enterprise System could be enacted. Results from all the companies examined are presented, although TransCom provided the biggest case study of this research.
Organizational Control from Enterprise System Use

The case analyses indicated that increasing control over the company’s operations was amongst the managerial intentions for installing an Enterprise System, and a purpose of its configuration. Control was implemented through a variety of mechanisms in the Enterprise System: the setting of access controls, the monitoring capabilities of the Enterprise System, the rules and procedures (process flow) that were embedded in the Enterprise System, and various checks carried out by the system.
Control from Access Profiles

With regards to the access profiles that specified which types of users had access to different types of information in the system, it could happen that those controls were either too strict or too lax. There were two ways by which those access controls could be changed: the first was to redefine what a particular access level allowed a person to do, and the second was that an individual could be given a more or less open type of user access. In general, however, the system controls implemented through access profiles were seen to be quite strict at NGlobe, making it difficult to bypass them: In terms of bypassing controls, the only ways to bypass a control are either to be given somebody else’s password, or to do it in cahoots with somebody else. (Cash Management and Treasury Process Director, NGlobe) At the time of the interviews there were many complaints from the users with regards to the setting of those profiles. Most of those complaints had to do with limited access to screens and transactions in the system, when access was required but was not given, due to the incorrect setting of access levels. As one interviewee mentioned:
That’s quite annoying for me, because I have the information that I need to put into SAP, I’ve got the information of how to do it, but I’m not given the access to it, because someone has made the decision that only one person in the company is allowed to do it. (Shift Planner, TransCom) On the other hand, users also pointed out that although needed access was unnecessarily limited in many cases, there were also many other cases where users were allowed to carry out tasks in the system which they did not necessarily need to. This increased access to the system was mainly due to the inattention paid to the correct setting of the access profiles. The result was that the intended controls in the system were seen to be very lax in some cases, because people had authorizations to do many things outside their immediate area. In some cases users “abused” their increased access, because it was easier for them to carry out a transaction in the system themselves, even one they should not be doing, than to ask the person who should be carrying out that transaction. As one interviewee mentioned: For example, if there is something to be posted in the material master, because they [users outside the Materials Management area] have got authorisation and they’ve got some knowledge of the material master, they think, right, OK, I’ll do it myself, rather than going to somebody who’s got better knowledge, and say, right, OK, can you add this so I can carry on with my processing. (Business Improvement Coordinator, TransCom)
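The access-profile mechanism described in this section can be illustrated with a minimal sketch. Python is used purely for illustration; the profile and transaction names are hypothetical and do not represent SAP’s or Oracle’s actual authorization model:

```python
# Minimal sketch of profile-based access control in an Enterprise System.
# Profile and transaction names are hypothetical illustrations.

ACCESS_PROFILES = {
    "materials_controller": {"view_material_master", "change_material_master"},
    "accounts_payable_clerk": {"post_invoice"},
}

def authorised(profile: str, transaction: str) -> bool:
    """Return True if the given access profile permits the transaction."""
    return transaction in ACCESS_PROFILES.get(profile, set())

# Too strict: a clerk who legitimately needs to view the material master
# is blocked because the profile was set incorrectly.
assert not authorised("accounts_payable_clerk", "view_material_master")

# Too lax: a profile that bundles view and change rights lets users act
# outside their immediate area, as the interviewees described.
assert authorised("materials_controller", "change_material_master")
```

The sketch makes the two failure modes reported by the users visible: whether a control is too strict or too lax depends entirely on how the profile sets are configured.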
Control from Monitoring Capabilities of the Enterprise System

Our interviews indicated that the managerial intention of increased control through the use of an Enterprise System was also afforded by the monitoring capabilities of the Enterprise System.
In this case the system tagged user actions with information about who carried out a transaction. This could be examined if the need arose, and was deemed beneficial by some of the interviewees: You can see who’s carried out the transaction, and you can take corrective action to understand why it happened. So, the system allows that visibility, to see where things have gone wrong, who’s made what transactions, who’s done what purchasing. (Purchasing Manager, SecSys) The monitoring capabilities of the Enterprise System also resulted in power differentials, as people who were allowed to see the actions of other users in the system had authority over the latter. In some cases, this could lead to uncertainties about how the information obtained by monitoring user actions in the system was used. As some users of the system mentioned: I think people can use [monitoring] to their own advantage. Sometimes there’s too much information there, if you make a mistake or whatever, it can be used to the wrong advantage, which I am not too happy about… It’s there; it can be done. That’s the worrying thing. (Shift Planning Coordinator, TransCom) The thing with SAP though, it’s the traceability. If we were trying to bypass the system, it’s all traceable. You can look into who did that, what changes you’ve made, they will come and hit you if you do try and bypass anything, because that’s one thing with SAP, it doesn’t lie, does it? It’s got to put your name against everything, every time. (Inventory Planner 1, TransCom) This apprehension with regards to the monitoring of user actions in the system led some users to believe that anyone with the right access could go and monitor what other users were doing in the system, and that this could be used to highlight areas
of inefficiencies with respect to the work carried out by them. This was seen quite negatively by those users, who considered the system a means of giving people the opportunity to spy on other people’s work, and then of blaming them if something went wrong. This apprehension of the system was also confirmed in the case of recording hours for a particular job done on a train in TransCom. Although this required the inputting of individual hours for each user that worked on a train, the company was only interested in the total amount of time it took to do a job, rather than how much time each individual spent on that job. However, users seemed hesitant to follow the recommended procedure of inputting hours worked in the system, as they could not see the business benefit of it, and were worried about how their data would be used: You have to record the individual’s clock number. And that in itself is problematic, because you are identifying a particular individual, and they may think, oh well, you know, that’s a bit too Big Brother, you’re watching my every movement. But at the end of the day, we’re not, as a business interested that it takes one man 20 minutes, but it takes another man 30 minutes to do a job. So, as a business we need to know that it took X man-hours for a particular job altogether. (SAP Facilitator 2, TransCom)
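The tagging of user actions described above amounts, in effect, to an audit trail. The following is a minimal sketch of the idea; the field and transaction names are hypothetical, not SAP’s actual data model:

```python
# Minimal sketch of transaction tagging for monitoring purposes.
# Field and transaction names are hypothetical, not SAP's data model.
from datetime import datetime, timezone

audit_trail: list[dict] = []

def record(user: str, transaction: str, details: dict) -> None:
    """Tag every action with who carried it out and when."""
    audit_trail.append({
        "user": user,
        "transaction": transaction,
        "details": details,
        "timestamp": datetime.now(timezone.utc),
    })

record("jsmith", "goods_receipt", {"material": "M-1", "qty": 4})
record("ajones", "goods_receipt", {"material": "M-2", "qty": 1})

# Anyone with the right access can later trace who did what -- the
# visibility that some users perceived as "Big Brother":
users = [e["user"] for e in audit_trail if e["transaction"] == "goods_receipt"]
print(users)  # ['jsmith', 'ajones']
```

The same structure supports both readings found in the interviews: corrective action by management, and the apprehension of being watched.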
Control from Enterprise System’s Process Flow

Our interviews also indicated that another way control was exercised through an Enterprise System was by the rules and procedures afforded by such a system. This meant that certain workflows were carried out in a certain order, and the output of one stage in the workflow was used as the input in the next stage. Therefore, the Enterprise System forced users in one department to complete their work before users in another department could start
with their own work. This mechanism of forcing users to work in a certain way was mentioned by many managers, characteristically: ERPs force you to undertake business in a certain way. And when I say force you, I mean the fact that you have to put a workflow in, to determine that these are the activities that need to be placed... Now you have a workflow, you put a control in. (Supply Chain Manager 2, TechniCom) This mechanism [of ERP process flow] enables us to ensure that nothing goes on to the lorries to be delivered, unless it’s also been flagged as having been physically finished within the factory. (IT Manager, FCom)
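The gating effect of the process flow, such as FCom’s rule that nothing is loaded for delivery unless flagged as physically finished, can be sketched as follows; the stage and function names are hypothetical:

```python
# Minimal sketch of workflow gating: the output of one stage is the
# precondition of the next. Names are hypothetical, inspired by the FCom
# example of deliveries being blocked until production flags an order done.

class WorkflowError(Exception):
    """Raised when a stage is attempted out of order."""

finished_orders: set[str] = set()

def flag_physically_finished(order_id: str) -> None:
    """Production marks the order as finished within the factory."""
    finished_orders.add(order_id)

def dispatch(order_id: str) -> str:
    """Distribution may only load orders that production has finished."""
    if order_id not in finished_orders:
        raise WorkflowError(f"Order {order_id} is not flagged as finished")
    return f"Order {order_id} loaded for delivery"

flag_physically_finished("O-1001")
print(dispatch("O-1001"))   # succeeds
# dispatch("O-1002") would raise WorkflowError, enforcing the process flow.
```

This is the sense in which an ERP "forces" users to undertake business in a certain way: the workflow itself is the control.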
Control from System Checks

SAP in TransCom was configured to carry out various checks that would ensure the quality and integrity of data in the system. For example, users were constrained from inputting an invoice twice. This could occur, for example, when users tried to create an invoice as a copy of the original one. In this case the system would check whether an invoice for the particular work and customer already existed, and come up with a warning about this. This would stop the user from entering a duplicate invoice, and was seen as a good control mechanism in SAP, in terms of minimizing duplication of data. Another example of a check carried out by the system concerned VAT on invoices being entered (manually) incorrectly, which would result in an incorrect total balance. This could happen more at month ends, when users in finance would be busy trying to input everything into the system. In this case mistakes were more likely than in other periods, and so the relevant financial journals would not balance in SAP. The checking of the balance across various ledgers was done automatically at the click of a button,
and if there were any mistakes SAP would come up with a warning saying that the changes could not be posted in the system because the relevant journals would not balance. Other smaller-scale examples of checks carried out by the system included the format of the fields entered (e.g. for numbers, dates, etc.), the range of values a field could accept, etc. Although the system provided those checks to ensure a level of quality of the data that was input into the system, in other cases the system was not capable of carrying out checks that users deemed necessary. For example, when an item needed to be returned as broken to the supplier, it needed to be recorded in the system as such. However, there were no checks that the item was not accidentally booked into the good stock, although it was declared as broken in the system. The users in the workshops deemed it necessary to have a warning message come up to inform them that they were trying to book broken parts into good stock. However, the system as it was did not carry out those checks, and would let the users carry on. This was in fact an example of drift that could occur instead of control, due to required checks not being supported by the Enterprise System. In general, however, the four mechanisms described above (access profiles, monitoring capabilities, process flow, and system checks) aided the managerial intention to enhance control over the company’s operations. The control mechanisms then applied to the users of the system and the way they carried out their work. However, while users worked on the system, it was possible that some degree of drift could occur. Our analysis indicated that this might be due to the configuration of the system, which could allow the omission of important information, as well as possibly allowing the bypassing or workaround of controls by users; the use of systems outside the Enterprise System was also identified as conducive to drift. Each of these factors will be examined in turn below, with relevant evidence from the companies quoted.
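Before turning to those factors, the duplicate-invoice check described above can be sketched as follows; the (work, customer) key and the warning text are hypothetical simplifications:

```python
# Minimal sketch of a duplicate-invoice check. The (work, customer) key
# and the warning text are hypothetical simplifications.

posted_invoices: set[tuple[str, str]] = set()

def post_invoice(work_id: str, customer: str) -> str:
    """Warn if an invoice for this work and customer already exists."""
    key = (work_id, customer)
    if key in posted_invoices:
        return f"WARNING: invoice for work {work_id}, customer {customer} already exists"
    posted_invoices.add(key)
    return "Invoice posted"

print(post_invoice("W-42", "C-7"))   # Invoice posted
print(post_invoice("W-42", "C-7"))   # WARNING: duplicate blocked

# By contrast, no analogous check existed for booking parts declared as
# broken into good stock -- a missing check that allowed drift to occur.
```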
Organizational Drift from Enterprise System Use

System Configuration

Based on our analysis, it was evident that the actual configuration of the system was quite important in ensuring that the required controls were properly implemented. If the controls described in the previous section were not correctly configured, this could result in users taking actions in the system that they should not, and this could have a “ripple” effect on other departments. For example, as one interviewee mentioned, lax system controls allowed individuals to carry out transactions in the system that they were not supposed to, and this had an impact on various departments of the company: Somebody from the other sites went and built some transaction on my site the other day, which messed up my stock basically… [The impact of this transaction would be] financial, because the figures wouldn’t have matched. (Materials Controller 1, TransCom) However, the system could be intentionally configured to allow more lax controls, in order to cater for business necessities, for example: There are benefits to being able to do it, like for example, if say, the person who places the order in another site is off sick, and there’s only say, somebody here who can place the order, then that means that he can carry out the transaction. (Materials Controller 1, TransCom) In other cases, the affordances of the system could force the company to choose a configuration (e.g. the setting of access profiles) from a range of limited possibilities, which did not accurately reflect the needs of the company. In this case, drift could occur if the intention to use the system in other than the prescribed ways was present. As one interviewee mentioned:
Say for instance with the guys out the back who change bin locations, that means they’ve got access to the material master. So they can change it if they want to. But unfortunately you can’t separately set it so that they can only view it. They’ve got access to it or not. So if they wanted to, not that they would, but they have access to change anything. (Materials Controller 2, TransCom) The way that the system was configured, according to the business necessities and what was afforded by it, could result in users using it in other than the prescribed ways, by working around intended controls, or not inputting important information in the system. Those two reasons for drift are now presented below.
Bypassing or Workaround of Controls

Our analysis indicated that organizational drift could arise from users bypassing the system controls, or working around them. The degree to which they could do this was enabled or constrained by the properties of the Enterprise System. It also depended on how the access profiles in the system were configured. For example, as one interviewee mentioned, although access to information on one screen was not allowed, the same information could be obtained by accessing an alternative screen: Now, if I go into Oracle inquiry for Accounts Receivable for example, I can see invoice information, but I can’t see credits in there... If I go into Business Objects with my inquiry access, it runs invoices and credits and it does so pretty happily. So I can get them that way. (Cash Management and Treasury Process Director, NGlobe) Although there might not have been intention in doing so, controls could still be bypassed unconsciously, because it was easier to bypass them than follow the prescribed ways of working with the system. For example, although access controls
could be imposed with the use of usernames and passwords, if those were shared by the employees then the intended controls lost their meaning: Everybody has their own login. But if say there were 4 or 5 of us on duty today, somebody wouldn’t come to it and log on, we’d use whatever was in, just for speed. I suppose really, if I walk away from it, I should log off, and the next person who comes would log in. But it just takes time to keep logging off and logging in when you’re busy. (Materials Planner, TransCom) Using one log-in for everybody in the workshop essentially meant that the intended controls in the system were bypassed by the users. If a generic logon approach was followed, it would be difficult to tell who was or was not using the system. It would also be impossible, if the need arose, to identify in the system which user actually carried out a transaction. In addition, in the implementation of SAP at TransCom there existed a “source list”, which contained the suppliers from which the company could buy items required for the maintenance and repair of trains. The source list in the system was created and maintained by the system administrators at ITUK, and a block was put in the system to disallow other users from amending it, so that materials could only be bought from the authorised suppliers identified in the source list. However, users identified a way to bypass this, by creating a new source list in the system, rather than using the one created for them by the ITUK team. This meant that they could effectively include any supplier in their own source list, and buy from any of them, without reference to the approved suppliers in the ITUK-maintained source list.
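Why shared logins defeat both the access and the monitoring controls described earlier can be seen in a short sketch; the user names are hypothetical:

```python
# Minimal sketch of why a shared login breaks the audit trail.
# User names are hypothetical.

current_login = "mplanner1"   # whoever logged in at the start of the shift

def post_goods_issue(material: str, qty: int) -> dict:
    # The system can only tag the action with whoever is logged in,
    # not with the person actually at the keyboard.
    return {"user": current_login, "material": material, "qty": qty}

# Three different colleagues use the same session "just for speed":
entries = [post_goods_issue("M-1", 2),
           post_goods_issue("M-2", 1),
           post_goods_issue("M-3", 5)]

# Every entry is attributed to mplanner1, so it is impossible to identify
# in the system which user actually carried out a transaction.
print({e["user"] for e in entries})   # {'mplanner1'}
```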
Missing Information

From the interviews carried out we realised that users could choose not to enter important information in the system. This might be due to a lack of this information, or to users resisting the system by not inputting it. It could also be because users did not understand the workings of the system, and the impact that their failure to input that information had on other departments and the company as a whole. If important information was missing from the system, negative organizational consequences were certain to occur. In this case, money might be lost, as one interviewee indicated: There are vehicles coming to our depot, and we don’t know who’s sending them. There’s no contract set up for them, so the commercial department can’t allocate working hours… We did quite a lot of work for these customers, but we don’t have any idea who to bill, or how to get the money back, or how much. (Production Planner, TransCom) One way to overcome the effects of missing information in the system was by introducing mandatory fields, or using the affordances of the system to carry out checks that this information was there. However appealing this approach might seem, there were practical considerations that would make it unworkable: If it was for me, I would say, make all the fields mandatory, and you have to fill them all in. In the real world, you could never do that. There are time constraints for one, availability of information, two. So you would have the end user who wouldn’t use the system. (SAP Facilitator, TransCom) The quality of information that was input into the system was generally recognised as very important for the correct functioning of the system: I think the big problem we’ve got with SAP at the moment is the quality of data being entered. I think there is a perception that nobody is using SAP, for once the information is entered, it’s forgotten, it’s never used again. And therefore a lot of the
information that gets put in is completely rubbish. And that can only be changed by the people using it, there’s nothing SAP could do to make that any better, that’s entirely human. (Reliability Group Leader, TransCom) In addition to the quality of the information input into the system, the timing of inputting that information was also important. For example, due to the vast number of jobs done on trains in TransCom and the associated number of service orders, work on a train could start before the relevant service order was created in SAP. If this mistake was realised later on the same day or the next day, it could be too late, as the train would have been repaired by then, and would have left the depot. This would then mean that it would be difficult to identify, without any records, what work was done on a train. Even worse, if this mistake was not found at all and the service order was not created, then the train would have been repaired for free, without the customer being charged for it.
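A middle way between making all fields mandatory (judged unworkable above) and enforcing nothing is selective validation of only the fields needed downstream, for example for billing. A minimal sketch, with hypothetical field names:

```python
# Minimal sketch of selective mandatory-field validation on a service
# order. Field names are hypothetical; only fields needed downstream
# (e.g. for billing) are enforced, since making every field mandatory
# was judged unworkable in practice.

MANDATORY_FIELDS = {"customer", "contract", "work_description"}

def create_service_order(fields: dict) -> str:
    missing = MANDATORY_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"Cannot create service order; missing: {sorted(missing)}")
    return "Service order created"

print(create_service_order({
    "customer": "C-7",
    "contract": "K-12",
    "work_description": "brake overhaul",
    "hours": 3,               # optional fields are not enforced
}))
# Omitting "contract" would raise ValueError instead of letting the order
# (and later the billing) silently go missing.
```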
Use of External Systems

In all of the companies examined, most reporting requirements were met by extracting data from the system and inputting them into Excel for manipulation. In SecSys for example, a key process within the supply chain was customer order satisfaction, examining whether the company was able to satisfy customer orders within the delivery time that the customer wanted. The relevant data were extracted from SAP into Excel in this case, in order to produce a management report that was presented on a monthly basis to senior management. Similarly in TransCom, some of the reports that were run included, for example, the number of open and closed service orders in a given period, the number of purchase invoices processed per site, the number of materials issued, received or transferred in a certain period, etc. Although some of the reports taken directly out of the system would give all the information
that was needed, the layout was generally not right for presenting those reports to management; hence the need to manipulate the data in Excel. Excel in this case was considered more advanced, in terms of being able to summarize and carry out calculations on the data, produce graphs, manage the layout, etc. However, by taking data out of the ERP system and manipulating them in another piece of software (Excel) that did not impose access restrictions, data could be (intentionally or unintentionally) falsified, producing erroneous results. If this occurred, then control over the accuracy and truthfulness of those data would be lost, and drift would occur: You have that functionality, it’s very good, you can export it [data from the system]. The problem is, that you can then manipulate the data [in Excel] into any way you want, you know, anyhow you want. And for me, that’s potential loss of control, because, OK, if you imagine, 2 groups of people are producing the same data in theory, manipulating it slightly differently, and potentially you turn up with 2 individuals at the same meeting, with 2 different sets of data. (SAP Facilitator 2, TransCom) The use of Excel was also not always a panacea for the production of reports. Users had to know Excel quite well in order to be able to manipulate the data coming out of the Enterprise System. In the Materials Management area in TransCom for example, the output from more than one area of SAP had to be extracted at a time, and each of those outputs had to be entered into an Excel spreadsheet and then combined using specialist functions such as vlookup. This was seen as too complicated and time-consuming, distracting the users from the main job that they should be doing by requiring them to learn complex functions in Excel to manipulate the data taken out of the system.
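The vlookup-style combination of extracts that the TransCom users performed in Excel corresponds to a simple join. A sketch follows, in Python purely for illustration and with hypothetical data:

```python
# Minimal sketch of combining two extracts from the Enterprise System,
# the kind of join TransCom users performed in Excel with vlookup.
# Data are hypothetical.

stock_extract = {"M-1": 40, "M-2": 12}                        # material -> qty
description_extract = {"M-1": "brake pad", "M-2": "door seal"}

# Equivalent of a vlookup from the second extract into the first:
report = [
    {"material": m, "description": description_extract.get(m, "UNKNOWN"), "qty": q}
    for m, q in stock_extract.items()
]
for row in report:
    print(row)

# Once data leave the system, no access controls or audit trail apply:
# any value in `report` can be edited freely, which is the potential loss
# of control the interviewees described.
```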
DISCUSSION AND THEORY DEVELOPMENT

In this section we draw on the analysis presented above of the ways control and drift can occur, as well as on our theoretical concepts of disembedding/reembedding and of human and machine agencies, in order to develop a theoretical conceptualisation that explains the processes that lead to control and drift within the context of an Enterprise System.
Human and Machine Agencies

The actual use of the system in the analysis of the five companies can be conceptualised by the interplay between human and machine agencies. A human agency position on the use of the Enterprise System assumes that humans (here, Enterprise System users) could choose to use the system minimally, invoke it according to needs, or improvise in ways that could produce unanticipated consequences (Boudreau & Robey, 2005; Orlikowski, 2000). On the other hand, machine agency is assumed to embody rules guiding human action, limiting choice alternatives and monitoring human action (Boudreau & Robey, 2005; Winner, 1977; Zuboff, 1988). Although machines themselves are products of human agency (when the development stage is considered), once they are installed and left to operate they become constraints on or enablers of human agency (Boudreau & Robey, 2005; Nandhakumar et al., 2005; Rose et al., 2003). In other words, the assumption in the handling of agency in this chapter is that human (user) agency is characterised by intentionality to perform certain actions in the system, while machine (system) agency is characterised by affording (enabling or constraining) the intended human actions in the system. This position follows amongst others the works of Jones (1999), Rose and Jones (2004) and Rose et al. (2003), who argue that by defining agency as the “capacity to make a difference” (Giddens, 1984), both machines
and humans can be viewed as possessing agency. However, humans have intentionality and self and social awareness, which result in capabilities for the interpretation of particular situations and actions. Machines, however, lack those capabilities (at least to a large degree relative to humans, based on current technologies).
Embedding-Disembedding of Human Agency

In this chapter we argue that human agency can be further elaborated into the disembedding and reembedding of user actions in the system. This refers to the ways that the actions of one user are disembedded to other users, and then reembedded by those users as input for their own work in the system. The disembedding of actions in this case can be linked to data input by users into the system, and the reembedding of actions can be linked to data use by users of the system. Disembedded actions refer to users in one department using the system to carry out a transaction (such as finance giving authorisation to do the stock count). This action then gets disembedded, for example from the finance area to other departments or users concerned (for example the Materials Management area). This disembedded action then gets reembedded locally (in this case by the Materials Management users), in terms of being usable as input for their own work (in this case carrying out the stock count). The disembedding and reembedding of human actions in the system can be seen to be facilitated by the workflow that is inscribed in the Enterprise System, and the single database that enables monitoring of user actions. Authorisations for the disembedding and reembedding of information are then made possible with the use of access profiles that specify which users can disembed and reembed various pieces of information in the system. Because of the workflow inscribed in the ERP system, the disembedding and reembedding of information can be seen to
operate in cycles, i.e. the output of one phase of the workflow being used as the input to the next phase of the workflow. This would also mean that an action of a user in the system would impact other users of the system, in a cause and effect, or action/consequence, manner. However, we argue that the employment of the disembedding/reembedding concepts goes a step further than simply describing cause and effect. Due to the global and integrative nature of the Enterprise System, the action of a user in the system is not simply the cause of another action by another user, but also entails visibility across geographical, functional and time dimensions. This means that the disembedded action in the system is recorded, and could be examined by authorised users in other locations of the company, in other functional areas, or in the future if the need arose. This action therefore ceases to be local, but is “disembedded” across time and space. As discussed, the disembedding and reembedding of human actions in the system can be seen to be facilitated by the machine agency of the Enterprise System. For example, the fact that the system consists of a single global database enables user actions to be disembedded, in terms of being visible to other users. The single database also enables other users to view disembedded actions in the system and reembed them, in terms of using them as the input for their own work. The disembedding and reembedding of human actions can also be seen to be facilitated by the workflow and access controls in the ERP system, which specify which actions users can take in the system in order for them to be disembedded, and which other users those actions would impact, in terms of those actions being reembedded by them. When the user actions in the system are disembedded, however, any errors, omissions or inconsistencies by the user are disembedded as well. This could have a “ripple” effect on the company’s operations as a whole, and therefore drift might occur. As has been discussed in the analysis section, this drift might occur because of
the system configuration, which could allow for missing information in the system, the bypassing of controls by users, or the use of external systems such as Excel.
Conceptualisation of Enterprise System Use and Organizational Consequences (Control and Drift)

The figure below presents our conceptualisation of the organizational impact (classified as control or drift) of the use of Enterprise Systems, according to the data we gathered from the case study companies and the theoretical frameworks that we employed.
Although the factors from Enterprise System use in Figure 1 are shown to lead to organizational control and/or drift, this result is in essence the interpretation of the researcher according to the unique context where these factors are identified. It can be argued, for example, that the same factors could be interpreted in another way in a different context. For example, while missing information was interpreted as resulting in organizational drift, it could be argued that such missing information was the outcome of users not inputting the required information in order to signal problems with the system or data incompatibilities, and this could be viewed as a control factor (e.g. Kavanagh, 2004; Marakas & Hornik, 1996; Markus, 1983).
Figure 1. The impact of enterprise systems on control and drift: A human and machine agency perspective through the embedding and disembedding of human actions
1227
Enterprise Systems, Control and Drift
Similarly, with the case of the monitoring of user actions afforded by the Enterprise System which was interpreted to lead to organizational control, in other contexts it could potentially lead to users resisting the system. In this sense, some users in TransCom have expressed their perception of the visibility offered by the ERP system as a kind of “Big Brother”. Although these users were subsequently educated and trained to use the system, the visibility functionality of the ERP system could otherwise potentially result in user resistance which would be detrimental to the company (Kossek et al., 1994; Martinko et al., 1996), and which would therefore cause organizational drift as far as the use of the system is concerned. As such, the interpreted nature of the factors from ERP use leading to either organizational control or drift must be mentioned, which depends on the context where these factors are encountered. The importance of this conceptualisation then does not lies in enumerating the various factors affecting ERP use, as well as the factors from ERP use leading to either control or drift. The importance of the conceptualisation lies more in sensitizing the reader with regards to the need to examine the context where an Enterprise System is used, as well as the actual use of the system and its outcome in terms of organizational control and drift. This means that when examining the use of an ERP system, the factors impacting its use must be examined, as well as the affordances of the system, the actions of users in the system, and the interdependencies between actions of different users (through the disembedding and reembedding of those actions).
Conclusion and Implications

The aim of this chapter has been to present and theorize the impact of actions related to an Enterprise System on organizational control and drift. We have taken an agency perspective, where both humans (users) and the machine (the Enterprise System) possess and exhibit agency. We have based our conceptualisation on the assumption that the Enterprise System acts by affording (enabling or constraining) states of control and drift. We have also based our conceptualisation on the assumption that human actions are characterised by intentionality, and have shown that those actions can be embedded and disembedded, consequently having an impact on organizational control and drift. Our findings draw upon and complement the findings of Hanseth et al. (2001), where side effects can lead to loss of control (drift) due to the integrative nature of an ERP, through which these side effects are "rippled". We have enhanced this understanding by arguing that drift can occur through the disembedding and reembedding of user actions in the system (see Figure 1). Our results also refine Boudreau & Robey (2005), who argue that although ERPs can constrain human agency, human agency can still be exercised within such systems. We argue that this exercise of human agency on the part of users can lead to drift, as shown in Figure 1. Finally, we argue that the concept of panoptic empowerment identified by Elmes et al. (2005) occurs through the disembedding and consequent reembedding of user actions in the system. For practitioners, our findings indicate that attention needs to be paid to the setting of access profiles, according to what is afforded by the system. Attention should also be paid to the actual configuration of the system, to minimize the bypassing of controls and the unwanted or unauthorised use of the system, which can lead to drift. The importance of inputting required information into the system should also be emphasized to employees, even though the system may not force users to enter it. This also links to user training to increase understanding of the system. If users do not understand how the system works and how their actions in the system impact other users, then errors, omissions or inconsistencies will keep occurring and their effects will be "rippled" across the company, resulting in drift. On the other hand, overly tight controls in the system can reduce the resilience of the company in responding to future challenges and adapting to new realities (Ignatiadis & Nandhakumar, 2005). In that sense, some degree of calculated drift may be beneficial, as long as it is not caused by inadvertent or uninformed user actions. The thin line between control and drift is different for each company, and depends on factors internal and external to it. Our model presents an understanding of the processes that lead to control and drift, so that managers can tailor these processes according to the idiosyncrasies of their companies.
References

Akkermans, H., & van Helden, K. (2002). Vicious and virtuous cycles in ERP implementation: a case study of interrelations between critical success factors. European Journal of Information Systems, 11(1), 35–46. doi:10.1057/palgrave/ejis/3000418

Al-Mashari, M., & Al-Mudimigh, A. (2003). Enterprise Resource Planning: A taxonomy of critical factors. European Journal of Operational Research, 146(2), 352–364. doi:10.1016/S0377-2217(02)00554-4

Bernroider, E., & Koch, S. (1999). Decision making for ERP investments from the perspective of organizational impact - Preliminary results from an empirical study. Paper presented at the 5th Americas Conference on Information Systems, Milwaukee, WI, USA.

Besson, P., & Rowe, F. (2001). ERP project dynamics and enacted dialogue: perceived understanding, perceived leeway, and the nature of task-related conflicts. The Data Base for Advances in Information Systems, 32(4), 47–65.

Bingi, P., Sharma, M. K., & Godla, J. (1999). Critical Issues Affecting an ERP Implementation. Information Systems Management, 16(3), 7–14. doi:10.1201/1078/43197.16.3.19990601/31310.2

Bloomfield, B. P., & Coombs, R. (1992). Information Technology, Control and Power: The Centralization and Decentralization Debate Revisited. Journal of Management Studies, 29(4), 459–484. doi:10.1111/j.1467-6486.1992.tb00674.x

Bloomfield, B. P., Coombs, R., & Owen, J. (1994). The social construction of information systems - the implications for management control. In R. Mansell (Ed.), Management of Information and Communication Technologies - Emerging Patterns of Control (pp. 143-157). London: Aslib.

Boudreau, M.-C., & Robey, D. (1999). Organizational transition to enterprise resource planning systems: theoretical choices for process research. Paper presented at the 20th International Conference on Information Systems, Charlotte, North Carolina, United States.

Boudreau, M.-C., & Robey, D. (2005). Enacting Integrated Information Technology: A Human Agency Perspective. Organization Science, 16(1), 3–18. doi:10.1287/orsc.1040.0103

Ciborra, C. U. (2000). A Critical Review of the Literature on the Management of Corporate Information Infrastructure. In C. U. Ciborra, K. Braa, A. Cordella, B. Dahlbom, A. Failla, O. Hanseth, V. Hepso, J. Ljungberg, E. Monteiro & K. A. Simon (Eds.), From Control to Drift - The Dynamics of Corporate Information Infrastructures (pp. 15-40). Oxford: Oxford University Press.

Ciborra, C. U. (2002). The Labyrinths of Information: Challenging the Wisdom of Systems. Oxford: Oxford University Press.

Coombs, R., Knights, D., & Willmott, H. C. (1992). Culture, Control and Competition; Towards a Conceptual Framework for the Study of Information Technology in Organizations. Organization Studies, 13(1), 51–72. doi:10.1177/017084069201300106
Duane, A., & Finnegan, P. (2003). Managing empowerment and control in an intranet environment. Information Systems Journal, 13(2), 133–158. doi:10.1046/j.1365-2575.2003.00148.x
Ignatiadis, I., & Nandhakumar, J. (2007). The impact of Enterprise Systems on organizational resilience. Journal of Information Technology, 22(1), 36–43. doi:10.1057/palgrave.jit.2000087
Elmes, M. B., Strong, D. M., & Volkoff, O. (2005). Panoptic empowerment and reflective conformity in enterprise systems-enabled organizations. Information and Organization, 15(1), 1–37. doi:10.1016/j.infoandorg.2004.12.001
Jones, M. R. (1999). Information Systems and the Double Mangle: Steering a Course between the Scylla of Embedded Structure and the Charybdis of Strong Symmetry. In T. J. Larsen, L. Levine, J. I. DeGross (Eds.), Information Systems: Current Issues and Future Changes (pp. 287-302). New York: OmniPress.
Gattiker, T. F., & Goodhue, D. L. (2004). Understanding the local-level costs and benefits of ERP through organizational information processing theory. Information & Management, 41(4), 431–443. doi:10.1016/S0378-7206(03)00082-X

Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Giddens, A. (1984). The Constitution of Society: Outline of the Theory of Structuration. Cambridge: Polity Press.

Giddens, A. (1990). The Consequences of Modernity. Cambridge: Polity Press.

Glaser, B. G., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: Aldine.

Hanseth, O., Ciborra, C. U., & Braa, K. (2001). The Control Devolution: ERP and the Side Effects of Globalization. The Data Base for Advances in Information Systems, 32(4), 34–46.

Holland, C. P., & Light, B. (1999). A critical success factors model for ERP implementation. IEEE Software, 16(3), 30–36. doi:10.1109/52.765784

Hong, K. K., & Kim, Y. G. (2002). The critical success factors for ERP implementation: an organizational fit perspective. Information & Management, 40, 25–40. doi:10.1016/S0378-7206(01)00134-3
Kallinikos, J. (2004). Deconstructing information packages: Organizational and behavioural implications of ERP systems. Information Technology & People, 17(1), 8–30. doi:10.1108/09593840410522152

Kavanagh, J. F. (2004). Resistance as Motivation for Innovation: Open Source Software. Communications of the Association for Information Systems, 13, 615–628.

Koch, C. (2001). BPR and ERP: Realising a vision of process with IT. Business Process Management Journal, 7(3), 258–265. doi:10.1108/14637150110392755

Kossek, E. E., Young, W., Gash, D., & Nichol, V. (1994). Waiting for Innovation in the Human Resources Department: Godot Implements a Human Resource Information System. Human Resource Management, 33(1), 135–160. doi:10.1002/hrm.3930330108

Legare, T. L. (2002). The role of organizational factors in realizing ERP benefits. Information Systems Management, 19(4), 21. doi:10.1201/1078/43202.19.4.20020901/38832.4

Malone, T. W. (1997). Is Empowerment Just a Fad? Control, Decision Making, and IT. Sloan Management Review, 38(2), 23–35.
Marakas, G. M., & Hornik, S. (1996). Passive Resistance Misuse: Overt Support and Covert Recalcitrance in IS Implementation. European Journal of Information Systems, 5(3), 208–220. doi:10.1057/ejis.1996.26

Markus, M. L. (1983). Power, Politics, and MIS Implementation. Communications of the ACM, 26(6), 430–444. doi:10.1145/358141.358148

Markus, M. L., & Tanis, C. (2000). The Enterprise Systems experience - From adoption to success. In R. W. Zmud (Ed.), Framing the domains of IT management: Projecting the future through the past. Cincinnati, OH: Pinnaflex Educational Resources, Inc.

Martin, I., & Cheung, Y. (2000). SAP and Business Process Reengineering. Business Process Management Journal, 6(2), 113–121. doi:10.1108/14637150010321286

Martinko, M. J., Henry, J. W., & Zmud, R. W. (1996). An attributional explanation of individual resistance to the introduction of information technologies in the workplace. Behaviour & Information Technology, 15(5), 313–330. doi:10.1080/014492996120094

Miles, M. B., & Huberman, A. M. (1994). Qualitative Data Analysis (2nd ed.). Thousand Oaks, California: Sage Publications.

Nah, F. F., Lau, J. L., & Kuang, J. (2001). Critical factors for successful implementation of enterprise systems. Business Process Management Journal, 7(3), 285–296. doi:10.1108/14637150110392782

Nandhakumar, J., Rossi, M., & Talvinen, J. (2005). The dynamics of contextual forces of ERP implementation. The Journal of Strategic Information Systems, 14(2), 221–242. doi:10.1016/j.jsis.2005.04.002
Newell, S., Huang, J. C., Galliers, R. D., & Pan, S. L. (2003). Implementing Enterprise Resource Planning and knowledge management systems in tandem: fostering efficiency and innovation complementarity. Information and Organization, 13(1), 25–52. doi:10.1016/S1471-7727(02)00007-6

Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.

Orlikowski, W. J. (1991). Integrated Information Environment or Matrix of Control? The Contradictory Implications of Information Technology. Accounting, Management and Information Technologies, 1(1), 9–42. doi:10.1016/0959-8022(91)90011-3

Orlikowski, W. J. (2000). Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations. Organization Science, 11(4), 404–428. doi:10.1287/orsc.11.4.404.14600

Orlikowski, W. J., & Baroudi, J. J. (1991). Studying Information Technology in Organizations: Research Approaches and Assumptions. Information Systems Research, 2(1), 1–28. doi:10.1287/isre.2.1.1

Pickering, A. (1995). The Mangle of Practice: Time, Agency and Science. Chicago: University of Chicago Press.

Rose, J., & Jones, M. R. (2004). The Double Dance of Agency: a socio-theoretic account of how machines and humans interact. Paper presented at the ALOIS Workshop: Action in Language, Organizations and Information Systems, Linkoping, Sweden.

Rose, J., Jones, M. R., & Truex, D. (2003). The problem of agency: How humans act, how machines act. Paper presented at the ALOIS Workshop: Action in Language, Organizations and Information Systems, Linkoping University, Linkoping, Sweden.
Rose, J., & Truex, D. (2000). Machine agency as perceived autonomy; an action perspective. In R. L. Baskerville, J. Stage & J. I. DeGross (Eds.), Organizational and Social Perspectives on Information Technology (pp. 371–390). Aalborg, Denmark: Kluwer.

Schniederjans, M. J., & Kim, G. C. (2003). Implementing enterprise resource planning systems with total quality control and business process reengineering: survey results. International Journal of Operations & Production Management, 23(3/4), 418–429. doi:10.1108/01443570310467339

Shang, S. S. C., & Seddon, P. B. (2000). A comprehensive framework for classifying the benefits of ERP systems. Paper presented at the Americas Conference on Information Systems, Long Beach, California.

Shanks, G., Parr, A., Hu, B., Corbitt, B., Thanasankit, T., & Seddon, P. B. (2000). Differences in critical success factors in ERP systems implementation in Australia and China: a cultural analysis. Paper presented at the European Conference on Information Systems, Vienna, Austria.

Sia, S. K., Tang, M., Soh, C., & Boh, W. F. (2002). Enterprise Resource Planning (ERP) Systems as a Technology of Power: Empowerment or Panoptic Control? The Data Base for Advances in Information Systems, 33(1), 23–37.
Siriginidi, S. R. (2000). Enterprise Resource Planning in Reengineering Business. Business Process Management Journal, 6(5), 376–391. doi:10.1108/14637150010352390

Somers, T., & Nelson, K. (2001). The Impact of Critical Success Factors across the Stages of Enterprise Resource Planning Implementations. Paper presented at the Hawaii International Conference on Systems Sciences.

Tang, M., Sia, S. K., Soh, C., & Boh, W. F. (2000). A Contingency Analysis of Post-bureaucratic Controls in IT-related Change. Paper presented at the 21st International Conference on Information Systems, Brisbane, Queensland, Australia.

van Fenema, P. C., & van Baalen, P. J. (2005). Strategies for Dealing with Drift during Implementation of ERP Systems. Rotterdam: Erasmus Research Institute of Management.

Walsham, G. (1993). Interpreting Information Systems in Organizations. Chichester: John Wiley & Sons.

Walsham, G. (1995). The Emergence of Interpretivism in IS Research. Information Systems Research, 6(4), 376–394. doi:10.1287/isre.6.4.376

Winner, L. (1977). Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. London: MIT Press.

Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. Oxford, UK: Heinemann Professional Publishing.

This work was previously published in Global Implications of Modern Enterprise Information Systems: Technologies and Applications, edited by Angappa Gunasekaran, pp. 317-343, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 5.4
The Impact of Enterprise Systems on Business Value

Sanjay Mathrani, Massey University, New Zealand
Mohammad A. Rashid, Massey University, New Zealand
Dennis Viehland, Massey University, New Zealand
Abstract
Introduction
A significant investment in resources is required to implement integrated enterprise systems as technology solutions, while the effectiveness of these systems in achieving business value remains unclear and largely empirically unexplored. Enterprise systems integrate and automate business processes but, unarguably, the real business value can only be achieved from improvements realized by transforming enterprise systems data into knowledge through analytic and decision-making processes. This study explores a model of transforming ES data into knowledge and results by comparing two case studies that examine the impact of enterprise systems information on organizational functions and processes, leading to the realization of business value.
The implementation of enterprise systems has been considered the most important development in the corporate use of information technology (Davenport, 1995, 1998; Davenport & Prusak, 1998; Deloitte, 1998). Enterprise systems (ES) broadly include all enterprise-wide systems: enterprise resource planning (ERP) systems and any extended modules such as supply chain management (SCM) or customer relationship management (CRM) systems. However, despite a few dramatic successes, many organizations still fail to realize the benefits while incurring huge costs and schedule overruns (Dalal, Kamath, Kolarik, & Sivaraman, 2004). It has been estimated that half of all ES implementations fail to achieve the desired results (Jarrar, Al-Mudimigh, & Zairi, 2000).
DOI: 10.4018/978-1-59904-859-8.ch010
In most cases enterprise systems are implemented to improve organizational effectiveness (Davenport, 1998, 2000; Markus & Tanis, 2000). These software applications connect and manage information flows across complex organizations, allowing managers to make decisions based on information that accurately reflects the current state of their business (Davenport, Harris, & Cantrell, 2002). These systems are implemented to bring about definite business benefits that justify the investment. Truly significant return on investment (ROI) comes from the process improvements that ES supports, not just from improved information access. In most implementations, ES software alone makes only a marginal improvement in business performance. If organizations continue to follow the same pre-ES business processes after implementation, they can expect the same or possibly worse performance. ES software can, however, enable and support many new and improved processes, but not without the organization deciding what those processes are and committing to them. Positive ROI can come from changing the way business was performed in the past to more streamlined, faster and lower-cost processes that better serve the needs of the customer and, if that is done well, the organization will be a winner (Donovan, 2003). The focus of this chapter is to better understand the effectiveness of enterprise systems technology in an organizational setting. A qualitative research methodology is used to explore how firms can leverage ES technologies to realize improved business value. Field studies were conducted in two large manufacturing organizations in India that have implemented ESs, in order to understand their experience in achieving growth by leveraging data from their ES investment. Semi-structured interviews were conducted with senior managers of the two organizations between January 2005 and February 2006. The empirical data were integrated and analysed to formulate the inferences presented in this paper. Both organizations had aggressive growth plans with an objective to achieve better
penetration of the competitive market by improving their operations. Both organizations had implemented an ES at least three years earlier and so were in the mature phase of implementation. One organization had achieved considerable success from its ES implementation, whereas the other had achieved little. The two cases are compared to identify the reasons for their levels of success.
Business Benefits

The justification for adopting ES centres on the anticipated business benefits from the ES. To receive benefit from an ES, there must be no misunderstanding about what the system is for and how it is used; even more importantly, organizational decision makers must have the background and temperament for this type of decision making, coupled with the right quality of information (Donovan, 1998). Many researchers have evaluated benefits from ES investments (Cooke & Peterson, 1998; Davenport et al., 2002; Deloitte, 1998; Donovan, 1998, 2001; Hedman & Borell, 2002; Ittner & Larcker, 2003; Shang & Seddon, 2000; Yang & Seddon, 2004). These studies have found that ESs are designed to help manage organizational resources in an integrated manner. Furthermore, the level of integration that is promoted across functions in an enterprise closely relates to the primary benefits that are expected as a result of implementation. After adoption, improved business performance should produce both operational and strategic benefits (Irving, 1999; Jenson & Johnson, 2002; Nicolaou, 2004). A study of 85 global companies (Deloitte, 1998) found tangible benefits (e.g., cost savings, faster processing) and intangible benefits (e.g., improved information visibility, new/improved processes, and improved customer responsiveness) from ES implementation. In a survey of 163 large firms (Davenport et al., 2002), key benefits realized by organizations adopting ES included
better management decision making, improved customer service and retention, ease of expansion/growth, increased flexibility, faster and more accurate transactions, cost reduction, and increased revenue. There have also been some studies on organizational benefits resulting from overlapping implementations of knowledge management (KM) and ES in organizational settings (e.g., Newell, Huang, Galliers, & Pan, 2003; Baskerville, Pawlowski, & McLean, 2000; Bendoly, 2002). However, this paper focuses on the impact of ES on realizing business value through the process of transforming ES data into knowledge by applying analytic and decision-making processes. This study attempts to establish the link between data, decisions, and actions; their impact on functional and business processes; and their outcomes.
Turning ES Data into ES Knowledge

A model conceptualized and used by Davenport (2000) and his team of researchers for turning ES data into ES knowledge is shown in Figure 1. The model comprises three major steps. The first is the context: the factors that must be present for the transformation of ES data into knowledge and results. The second is the transformation of ES data into knowledge, which takes place when the data are actually analyzed and then used to support a business decision. The third is the outcomes: the events that change as a result of the analysis and the implementation of the decisions made. As Davenport's model shows, the process of transforming ES data into knowledge leads to organizational changes. The most basic potential outcome of this process is changes in the behaviors of individual managers, employees, customers, suppliers, and all stakeholders in the value chain. Another outcome from the decisions or the behavioral changes may be new initiatives to bring about improvements in the business or to make changes in existing projects. The results of decisions can also include process changes: determining that an existing process is not working effectively can lead to changes in the existing process or to the design and implementation of an entirely new process. The ultimate results of all these activities are the business benefits, which lead to positive financial impacts for the organization. "Decisions lead to new behaviors, new initiatives, and processes, which do not matter unless they improve the bottom line and the return to shareholders" (Davenport, 2000, p. 225). It may be difficult to draw a direct chain of influence from prerequisites to transformation to non-financial outcomes to financial results, but establishing that linkage should be the objective of an organization that invests effort and resources in ES data transformation (Davenport, 2000), and it is the focus of this study.
Figure 1. A model of how ES data are transformed into knowledge and results
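Read as a pipeline, the three steps can be rendered schematically in a few lines of Python. This is purely our illustration of the chain in Figure 1; every function and value below is an invented placeholder rather than notation from Davenport's model.

```python
# A schematic rendering of Figure 1's chain: context enables the
# transformation of ES data into knowledge, knowledge supports a decision,
# and the decision surfaces as behavioral, initiative, and process outcomes
# before any financial result can be measured.

def transform(es_data, context_ok):
    """Analysis yields knowledge only when the contextual prerequisites
    (strategy, culture, skills, data quality, technology) are in place."""
    if not context_ok:
        return None                      # data remains data; no knowledge results
    return {"insight": f"analysed {len(es_data)} transactions"}

def decide(knowledge):
    return "redesign process" if knowledge else "no action"

def outcomes(decision):
    # Non-financial outcomes precede, and must be linked to, financial results.
    return {"behaviors": "changed", "initiatives": "launched", "process": decision}

print(outcomes(decide(transform(["so-1001", "so-1002"], context_ok=True))))
```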
The two case studies are presented next, and the discussion follows Davenport's model: context, transformation, and outcomes for both cases, as shown in Figure 1.
Case Study 1

Company Overview

Growel Limited1 is a U.S. $1 billion forging manufacturing company and is one of the world's largest manufacturers and exporters of automotive engine and suspension components. It has the world's largest single-location forging capacity and one of the most technologically advanced commercial forge shops in the world. Growel is a publicly traded company whose stock has appreciated more than 200 percent since March 2004. With manufacturing facilities in India and Germany, the company manufactures a wide range of forgings and machined components for the automotive, diesel engine, railway, earthmoving, cement, sugar, steel, coal, shipbuilding, and oilfield industries. An ISO 9001:2000, ISO/TS 16949:2002 accredited company, Growel is internationally reputed for its cutting-edge technology, established quality processes, and capabilities to meet the exacting standards of the most demanding customers in the world. Growel Limited is a global corporation with world-class engineering capabilities, state-of-the-art manufacturing facilities, and a global customer base that includes General Motors, Toyota, Ford, Daimler Chrysler, Honda, Renault, Volvo, Caterpillar-Perkins, and several others. It is the largest manufacturer of axle components for heavy trucks, with a 35% global market share, and has a 10% global market share in engine components. The following sections discuss how Growel leveraged knowledge-driven technologies to improve business dynamics with considerable success. The discussion follows Davenport's model (context, transformation, outcomes) as shown in Figure 1.
Context: Strategic

Growel's journey towards becoming an international e-business began in the late 1990s. The company wanted to grow exports, widen its global footprint, secure new customers, become the world's largest manufacturer of axle components, and be a key global provider of engine components. To achieve these goals, Growel planned to double its manufacturing capacity by implementing a major capacity expansion program. However, this involved large investments and risks. The company had to resort to major cost controls, improve operational efficiencies, and optimize its business processes to counter the adverse financial effects of the major investments. Senior managers in the company decided to pursue a strategy of operational excellence. The company historically lacked integration between its order-to-cash, shipping, and accounts receivable processes. There were disputes on invoices and purchase orders relating to price and terms of business. There was a lack of visibility into finished goods inventory, and the overall accuracy of inventory was poor. Visibility of material requirements and inventory throughout the value chain was inadequate and did not provide decision support at all stages of operations. The company had not integrated its design and development practices with its operational systems; therefore the time lag between development and marketing of products was large and resulted in poor customer service and dissatisfaction. The pre-production approval process was another aspect that required attention. The company needed the ability to participate interactively with its customers at the early stages of product development and avoid rework at a later stage. The product forecasting process also required improvement. Managers only discovered that they had a shortage of manufacturing capacity when the line ran out. On the sales side, management had limited visibility of who its most and least profitable customers and products were. They
also did not have information on whether they were buying in the most cost-effective manner. The management team recognized what types of decisions had to be made to support their strategic objectives but could not utilize the available operational data.
Context: Organizational and Cultural

A company is as good as its people, and Growel has the advantage of a highly qualified and motivated manpower base. Since its inception, Growel has attached great significance to "people power" and considers its employees important assets. With interactive communication at all levels, Growel continues to provide a congenial and peaceful working atmosphere for its employees. In the process of ES implementation, the organizational and cultural elements were aligned to support the use of transaction data at Growel. The compensation system was also changed to reward salespeople for both sales volume and profit, with fixed and variable components of pay. The company created a friendly atmosphere within the organization which fostered an orientation to change. The organization also adopted a data-oriented culture and encouraged employees to use data to support any business decision.

Context: Skills and Knowledge

Growel has always had a high-quality, motivated workforce. The company employs about two thousand workmen, of whom over four hundred are engineers with a high ability to learn and implement modern manufacturing methods using high-tech equipment. The company provides extensive training both in-house and externally, including overseas exposure. Within the group of knowledge workers and analysts, the skills include detailed knowledge of the organization's underlying business processes. They possess extensive skills for interpreting the SAP data, including understanding how key elements relate and their limitations for analysis. They also have a thorough working knowledge of several analytic and data presentation software packages, along with strong interpersonal skills to train and support end users.

Context: Data

Issues of data quality were few at Growel, where transaction data captured in the SAP system were created internally from all transactions, from sales orders to shipping invoices. Monitoring and updating of data was a regular feature at Growel, and the transaction data were made available in a timely fashion to support decision making.

Context: Technology

Growel had historically been using a homegrown legacy system which provided disparate information that lacked proper integration and utilization. However, this lack of operational data to support decision making changed with the implementation of SAP's R/3 in 2000-01. The modules implemented were finance, sales and distribution, materials management, production management, and human resources. The company now had unprecedented visibility into its operations and customer base. SAP business intelligence tools were extensively used to extract, analyze, and develop ad hoc reports.
Transformation

The transformation process at Growel was a result of putting knowledge-leveraging activities into action. How this happened is explained next. The value creation process was initially described in detail to gain an in-depth understanding of where and how Growel adds value for its customers. A critical success factor framework for each functional area was developed, and a strong linkage between departmental performance indicators and top-level metrics for gauging the effectiveness of company strategy was put in place. Task
groups within each functional area translated the general framework into team-specific programs to leverage innovations for achieving strategic goals and plans. The model was shared with all the relevant team players. Descriptive indicators of the improvement and corrective-action plans were identified to facilitate decision making. The implementation plans for these decisions, along with the steps needed to create the desired results, were identified, and forms were designed to describe these plans and their measures. The indicators were documented, choosing the reference for benchmarking and external validation along the timeline. The analytical process was the means by which ES data became knowledge. The SAP data required for each of the indicators were identified, extracted, and interpreted to create useful information for monitoring progress towards the objectives. The signals and messages coming from each indicator were analyzed and evaluated to support decision making. The decision-making process was based on high-quality, well-analyzed ES data on a multitude of factors. Some key areas where ES data were utilized for improvements were customer and product profitability analysis, price/volume analysis, market and customer segment analysis, and sales forecasting and operations planning analysis. All actions likely to make the results coherent with the strategic intent were identified, evaluated, and implemented.
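As a concrete illustration of this analytical step, the sketch below computes profit margin by customer and by product from transaction-level rows. The records, field names, and figures are mocked up for illustration; Growel's actual SAP extracts and business intelligence tooling are not reproduced in the source.

```python
from collections import defaultdict

# Mocked-up transaction rows (customer, product, revenue, cost) -- invented data.
transactions = [
    {"customer": "OEM-A", "product": "axle-beam",  "revenue": 120_000, "cost": 96_000},
    {"customer": "OEM-A", "product": "crankshaft", "revenue": 80_000,  "cost": 78_000},
    {"customer": "OEM-B", "product": "axle-beam",  "revenue": 60_000,  "cost": 42_000},
]

def margin_by(dimension):
    """Aggregate profit margin along one analysis dimension
    (customer, product, market segment, ...)."""
    totals = defaultdict(lambda: {"revenue": 0, "cost": 0})
    for row in transactions:
        totals[row[dimension]]["revenue"] += row["revenue"]
        totals[row[dimension]]["cost"] += row["cost"]
    return {key: round((t["revenue"] - t["cost"]) / t["revenue"], 3)
            for key, t in totals.items()}

print(margin_by("customer"))   # {'OEM-A': 0.13, 'OEM-B': 0.3}
print(margin_by("product"))    # low-margin lines are candidates for redesign or replacement
```

The same aggregation, run along different dimensions, is what supports the value engineering and product-line decisions described in the outcomes below.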
Outcomes: Changing Behaviors

One of the major outcomes of the initiatives described above was changing behaviors. Improved information sharing, transparency, and openness with customers, suppliers, employees, and all stakeholders resulted in improved interpersonal and business relations. Having easy access to invoice and purchase order data enabled Growel to improve price synchronization with customers and suppliers. The earlier disputes on invoices
and purchase orders relating to price and terms of business diminished. The online visibility of demand and supplies with customers and suppliers through the integrated supply chain management (SCM) system reduced the volatility and sudden spurts of demand that had existed earlier. This resulted in more streamlined supplies and a dramatic change in customer and supplier behavior, since Growel could now react better to change orders and was able to be more flexible in the manufacturing environment.
Outcomes: New Initiatives

The ability to analyze customer and product profitability led to a new initiative of value engineering to improve or change designs to make products more profitable. Unprofitable products and customers were identified, and the division's existing unprofitable product lines were replaced by more profitable new product lines. New initiatives towards implementing just-in-time inventory systems were undertaken, which decreased inventory costs substantially.
Outcomes: Process Changes

Growel recognized that the SAP data created opportunities for redesigning some business processes, which could create entirely new sets of decisions. Growel is moving at full speed to redesign some business processes and build e-commerce applications with SAP as a backbone for its legacy systems and other collaborative software such as SCM (Supply Chain Management) and PLM (Product Lifecycle Management). SAP provides capabilities such as CRP (Capacity Resource Planning) and BPR (Business Process Re-engineering), which offer powerful links within the entire value chain from customers to suppliers. The company has set up an integrated supply chain management system which enables real-time visibility of material requirements and inventory throughout the value chain and provides
decision support at all stages of operations. With a majority of the company's suppliers receiving supply chain data, the company has a real-time total demand management system in place. A virtual private marketplace has been created for Growel, through which the company engages in e-procurement and reverse auctions. The company has already started selling scrap online. In a development that will substantially reduce product development time and bring the company closer to its customers, Growel is in the process of implementing a Collaborative Product Commerce (CPC) module. CPC will enable the company to work online with its customers to design and develop products and to share information and knowledge with them. This will reduce product development time and costs and, more importantly, forge close ties with customers from the early stages of product development.
Case Study 2

Company Overview

Primemover2 is a multi-faceted engineering enterprise. Established in 1859, Primemover is one of India's leading and well-diversified engineering companies, with a US$500 million turnover. The company's core competencies are in diesel/petrol engines, power generating sets (gensets), and agricultural and construction equipment. The business operations of the company are divided into various business groups, strategically structured to ensure maximum focus on each business area while retaining a unique synergy in operations. The business groups are power generation, agricultural equipment, light engines, and infrastructure equipment. Primemover has six manufacturing plants at several locations in India. The company has an extensive sales and service network manned by a highly skilled and dedicated workforce. The power generation group of Primemover
designs and manufactures diesel engines and gensets for industry sectors such as power, oil and gas, construction, earthmoving, and transportation, and supplies to countries all over the world. The unit employs about 1,200 people. Originally Primemover manufactured only diesel engines. In 1993 the company collaborated with a German company to add gensets in order to extend its range of products, develop a more complete supply capability for the engine manufacturing industry, and position the company for long-term growth. Both product ranges served as solutions for customers, offering products across a wide range of kVA and hp ratings. The synergies created by the integration of the two product ranges enabled better designs and product offerings. Specifically, customers had more effective, one-stop access to a comprehensive range of products as well as simplified commercial relationships. By enabling improved service to customers, the integration of the two product lines enhanced the market position of the power generation group of Primemover. However, the company realized that it would face challenges in ensuring its profitable growth in the long term.
Context: Strategic

Primemover was one of the few companies that were full-line suppliers of the entire product range in its industry markets; however, the company was facing growing challenges. Despite increased demand resulting from a healthy, growing market, greater competition in its core markets and high operating costs could inhibit the achievement of the financial returns expected from its operations. Primemover had to overcome the competition and leverage new opportunities to ensure profitable growth for the future. To compete effectively, Primemover needed to improve its supply chain performance and cost; the business processes of the two product lines were not performing at the levels necessary to grow profitably in the emerging competitive environment. The time frames required to commit
to delivery of finished goods to customers were not competitive; customer order handling and service processes were complex; operations were not sufficiently flexible to enable rapid response to shifting demand; market share was not growing rapidly; and inventory-carrying costs and other expenses inhibited achievement of adequate financial returns. These challenges were compounded by the fact that 60% of Primemover's diesel engine and genset parts were externally sourced, with long lead times for products such as pistons from Germany and turbochargers from the UK. Forecast accuracy was low, and inventory data and related information were inaccurate. Thus, key goals for Primemover included enhancing its ability to respond to customer requirements, improving market share, and reducing costs throughout the operation. Primemover determined that it must improve its supply chain planning and execution capabilities in order to achieve better penetration of the competitive market, which was its prime objective.
Context: Organizational and Cultural

Primemover has an overall employee base of about three thousand employees and has a change-oriented organizational culture. Primemover enjoys the enviable reputation of being one of the few corporates that has successfully maintained harmonious human relations year after year, and the credit for this goes to all its employees, who have always been open to and welcomed change. The organization structure is collaborative, and the compensation system is aligned to achieving goals.

Context: Skills and Knowledge

Of the three thousand workmen Primemover employs, over five hundred are qualified engineers with technical and commercial expertise. The company has hired mostly skilled workmen in the last fifteen years. The company has been providing ongoing training and development to upgrade the skills and knowledge of its employees, including knowledge of the organization's underlying business processes.

Context: Data

Quality of data was not an area of focus for Primemover prior to ES implementation. The maintenance of data records was not consistent, and discrepancies were often encountered in the data records. There was a lack of discipline in updating transactions in the warehouse, which would lead to stock and other data discrepancies. This lack of accuracy and currency of information led to data integrity issues amongst employees. The tools for data extraction were also inadequate. Availability of transactional data and information was an issue: data extracts could not be made easily available and in time to support decision making.

Context: Technology

Primemover realized a need to couple its execution systems with a new class of supply chain planning software to address its future requirements. After considering various software solutions and determining the business strategy, the company selected SAP R/3 as the foundation of the planning system for its integrated operations. Primemover had historically been using an internally developed system which provided disparate information that lacked proper integration and utilization. However, the situation changed with the implementation of SAP's R/3 in 2000-01. The company implemented financial, sales and distribution, production management, and human resources modules, along with Web-enabled supply chain management systems. Primemover now had better visibility into its operations and customer base. Standard reports from SAP and Microsoft's Excel were used to analyze and report data.
Transformation

The transformation process at Primemover was based on information from the SAP system. The available SAP data were utilized, interpreted, and analyzed in various areas of operations to support decision making towards the company's strategic objectives. Standard reports from the SAP system were used and evaluated on a regular basis to support decision making. Many of these reports were exported into Excel spreadsheets to facilitate the application of analytical processes. The decision-making process was based on well-analyzed ES data on a multitude of factors. However, the organization's business strategy needed to be better aligned with departmental and divisional strategies. A definition of the information critical to the success of the organization was missing. This was evident from a remark by one of the interviewees: "there was a lot of confusion on what is to be achieved, which data needs to be analysed, where it is to be applied and to achieve what results". The analytical and decision-making process was based on ES data which were not very accurate. There was a lack of timely availability of data extracts to support analytical decision making, and the link between data, decisions, and actions was missing. Some key areas where ES data were utilized for improvements were market and sales forecasting and operations planning analysis, theory-of-constraints manufacturing analysis, raw material cost analysis, inventory analysis, activity-based costing, and process efficiency analysis.
Outcomes: Changing Behaviors

As a result of implementing an ES, the attitudes of customers, suppliers, employees, and all stakeholders improved but did not reach the levels expected, since inaccuracies were still present in the historical data. The online visibility of demand and supplies with customers and suppliers through the integrated SCM system reduced the number
of out-of-inventory events but did not achieve the reduction and streamlining of inventories anticipated. The lead times could not be reduced substantially, and delivery performance improved only marginally. The supply issues from vendors and customers continued, and a dramatic overall improvement in behaviors was not achieved.
Outcomes: New Initiatives

With some ability to analyze demand, and with the information now visible, the company implemented the following changes to its supply chain processes. Primemover analyzed its various market and customer segments, determined the service requirements for these segments, and classified products according to the volume and variability of demand (a classification sketched in code below). Primemover established monthly sales-forecasting and operations-planning meetings at which the supply-and-demand plans were formally reviewed; the performance of locally produced and imported products was considered; exceptions and unique requirements were brought up for discussion; and market intelligence and longer-term constraints in material and production were factored into plans.
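A common way to operationalise such a classification is to score each product's average monthly demand (volume) and its coefficient of variation (variability). The following minimal Python sketch illustrates the idea; the SKUs, demand series, and thresholds are invented assumptions, not Primemover's actual data or rules.

```python
from statistics import mean, stdev

demand_history = {                      # units shipped per month (mocked data)
    "genset-250kva": [40, 42, 41, 39, 43, 40],
    "engine-spare-kit": [5, 30, 2, 18, 0, 25],
}

def classify(series, volume_cut=20.0, cv_cut=0.5):
    avg = mean(series)
    cv = stdev(series) / avg if avg else float("inf")   # coefficient of variation
    volume = "high-volume" if avg >= volume_cut else "low-volume"
    stability = "stable" if cv <= cv_cut else "erratic"
    return f"{volume}, {stability}"

for sku, series in demand_history.items():
    print(sku, "->", classify(series))
# High-volume/stable items suit level production schedules; erratic items
# need safety buffers or make-to-order treatment.
```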
Outcomes: Process Changes

In order to shorten lead times, reduce inventory, and increase throughput, Primemover employed a "theory of constraints" model, establishing processes to identify critical material and capacity constraints and to optimize these constraints in its manufacturing operations on an ongoing basis. As the operations of the two product lines were unified, customer account numbers were merged, SKUs were combined, and the number of warehouses was reduced from five to three. These consolidations required considerable revamping of numbering schemes and physical storage strategies. Warehouse consolidation was coupled with improved policies for material movement, virtually eliminating the need to transfer material
between locations once received. Due to better tracking of material and a substantial reduction in the need to move material multiple times, costs for damaged and lost material declined from 1.60% to 0.78% of sales. Strategically significant and measurable improvements to inventory, cost, and customer service were also achieved. Forecast accuracy improved as the company gained experience with sales forecasting and operations planning. The new processes also gave visibility into material requirements for scheduled orders and further facilitated procurement planning. Primemover had earlier been placing large, irregular orders monthly or bimonthly with its suppliers but is now placing regular weekly orders, improving its suppliers' ability to plan and thereby improving Primemover's negotiating position. The store manager at Primemover remarked that the suppliers and customers were now giving "fewer headaches" with the new system. Primemover's customers are also able to plan better due to the company's improved forecasts. Resolution of manufacturing constraints has improved production throughput by 20%. Daily cycle counting has enabled inventory record accuracy to advance from 52% to more than 80%, which has facilitated a reduction in the levels of raw materials and finished goods. Finally, many employees are undertaking what-if analyses with the range of available data, further improving planning and execution across the supply chain. Primemover has chosen to integrate related product lines, establish supply chain channels to match supply-and-demand streams, and resolve constraints in its manufacturing network. The company recognizes that its process designs are not static and that its business and enabling systems will continue to evolve in line with market demands. Ongoing achievements in cost reduction and customer service improvement have enabled Primemover to make continual improvements in market position, premium pricing opportunities, and financial return.
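For readers unfamiliar with the metric behind this cycle-counting result, the short sketch below shows how inventory record accuracy is commonly computed: the share of counted items whose physical count matches the system record within a tolerance. The records and the 2% tolerance are illustrative assumptions, not the case company's figures.

```python
records = [   # (system quantity, counted quantity) per cycle-counted SKU -- mocked data
    (100, 100), (250, 248), (40, 25), (75, 75), (310, 309),
]

def accuracy(pairs, tolerance=0.02):
    """Share of SKUs whose counted stock is within tolerance of the
    system record -- a usual cycle-count accuracy measure."""
    hits = sum(1 for sys_qty, counted in pairs
               if sys_qty and abs(sys_qty - counted) / sys_qty <= tolerance)
    return hits / len(pairs)

print(f"inventory record accuracy: {accuracy(records):.0%}")   # 80% for this sample
```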
Results and Financial Impacts

Growel surpassed the previous full year's total revenue and exports in the first nine months. Today, Growel has achieved the distinguished position of the largest manufacturer of axle components for heavy trucks worldwide and one of the key global players for engine components. Primemover improved on the previous year's total revenue and exports (as shown in Table 2). However, Primemover did not fully achieve its anticipated objectives and results. Primemover had expected inventory levels to fall by 60-70% and cost efficiency to improve by 40-50%; however, the reduction in inventory levels achieved was only 40%, and the improvement in cost efficiency was only 20%. Another expectation of Primemover was that its human resources would become highly motivated as an end result, but the expected motivation was not seen. Improvements in on-time delivery and in the time taken for new product development, however, were achieved as anticipated. On the financial side, both Growel and Primemover expected a 20% improvement in the bottom line, profit after tax. Primemover finally achieved an improvement of only 9%, whereas Growel achieved 26%.
Discussion

These cases illustrate Davenport's conceptual framework (Figure 1) of how business benefits are achieved from the transformation of ES data into ES knowledge, highlighting the effectiveness of enterprise systems in the two organizations. The overall business benefits, including the financial results obtained, in Growel's case surpassed those of Primemover. The reasons for this are given in the following section. Growel and Primemover both had a business strategy and had identified their business objectives; however, the business strategy in Growel's case was clearly articulated and aligned.
Table 1. Key successes achieved by Growel and Primemover

1. Product development time: Growel decreased this to around three weeks from six months earlier, closely matching industry benchmarks; Primemover decreased it to around two months from three months earlier, also closely matching industry benchmarks.
2. On-time delivery: Growel increased to 98% on-time; Primemover increased to 85% on-time.
3. Inventory: Growel reduced inventory by 80%; Primemover by 40%.
4. Cost efficiency: Growel increased cost efficiency by 60%; Primemover by 20%.
5. Relationship management: Growel achieved much improved relations with customers and suppliers; Primemover achieved some improvement.
6. Human resources: Growel's workforce was highly motivated; Primemover's was motivated.
Table 2. Financial outcomes of Growel and Primemover as on March 31, 2005

1. Total revenues: Growel increased by 47%; Primemover increased by 24%.
2. Exports: Growel increased by 76%, contributing 48% of total revenue; Primemover increased by 15%, contributing 8% of total revenue.
3. Profit before tax: Growel increased by 37%; Primemover increased by 15%.
4. Profit after tax: Growel increased by 26%; Primemover increased by 9%.
Growel worked out its value creation process and identified the critical areas that required attention and improvement. It understood the key drivers and had the means to influence those drivers and measure them. Growel was able to translate its business strategy into departmental and divisional strategies. It knew what was to be achieved, which data needed analysis, where those data applied, and what outcomes were expected. Primemover had identified its key business objectives; however, a definition of the information critical to the success of the organization was lacking. Primemover could not create the link between departmental performance indicators and top-level metrics for gauging the effectiveness of the company strategy. Managers had data from their ES investment but could not leverage it to maximize the realization of benefits and achieve their business strategies. Growel understood the complexity of problems requiring analytic support. More complex issues, requiring sophisticated modeling and data analysis, are better served when analysts and decision makers are closely linked, which was the case in
Growel. The quality of management reviews also improved at Growel because executives became much more reliant on numbers in explaining their performance. Issues of data quality did not exist at Growel: monitoring and updating of data was a regular feature, and the transaction data were made available in a timely fashion to support decision making. In the case of Primemover, the lack of discipline in updating transactions in the warehouse led to data discrepancies and data integrity issues amongst employees. Data extracts could not be made in time to support decision making. The analytical and decision-making process was based on ES data which were unclean and inaccurate, and the link between data, decisions, and actions was lacking. Growel had streamlined its supply chain and achieved a major change in customer and supplier behavior, since Growel could react better to change orders and was able to be more flexible in the manufacturing environment. However, in the case of Primemover, the online visibility of demand and supplies with customers and suppliers
reduced the number of out-of-inventory events but did not achieve the reduction and streamlining of inventories anticipated. The lead times could not be reduced substantially, and delivery performance improved only marginally. Primemover's data management was inconsistent, and this, coupled with a lack of clarity about the information required to drive its strategy, was why the company could not achieve its full potential and anticipated objectives.
Conclusion

This study has examined the effectiveness of enterprise systems on organizational functions and processes for realizing business value. It has highlighted that business results follow in a culture where the business strategy is clearly articulated and defined. Such an organization works out its value creation process, identifying the critical areas that require attention and improvement. It understands the key drivers and has the means to influence those drivers and measure them. It translates its business strategy into departmental and divisional strategies, and knows what is to be achieved, which data need analysis, where those data apply, and what outcomes are expected. The organization has a culture that supports decision makers, who have a definition of the information critical to the success of the enterprise and the means to achieve it by linking data, decisions, and actions. And, to achieve all of this, the organization must possess the necessary expertise and skills in the use of the ES and its information. Quality of data plays a vital role. In order to succeed in today's competitive world, businesses must shift their focus from improving efficiencies to increasing effectiveness. Integrated access to pertinent information captured by the ES must be available so that effective decisions can be made towards successfully implementing strategies, optimizing business performance, and adding value for customers. Knowledge is a key
factor in this process. Success or failure is often attributed to enterprise systems or their implementation process. However, it is evident from this study that enterprise systems provide a platform of functionalities and information to an organization. It is the ability of an organization to extract value from data, distribute the results of analysis, apply knowledge, and establish decisions for strategic organizational benefit that leads towards business success, which eventually emerges from a process of ongoing transformation over time.
KEY TERMS AND DEFINITIONS
Business Process Re-Engineering (BPR): The rethinking and redesign of business processes to achieve performance improvements in terms of the overall cost, quality, and service of the business.

Conceptual Framework: A basic conceptual structure built from a set of concepts to outline possible courses of action or to present a preferred approach to solve a complex research problem.

Customer Relationship Management (CRM): Software systems that help companies to acquire knowledge about customers and deploy strategic information systems to optimize revenue, profitability, and customer satisfaction.

Enterprise Resource Planning (ERP): Software systems for business management that integrate functional areas such as planning,
manufacturing, sales, marketing, distribution, accounting, finances, human resource management, project management, inventory management, service and maintenance, transportation, and e-business.

Knowledge Management (KM): The creation, organization, sharing, and flow of knowledge within and among organizations.

Return on Investment (ROI): A performance measurement used to evaluate the efficiency of an investment. ROI is calculated as the annual financial benefit (return) after an investment minus the cost of the investment, divided by the cost of the investment, with the result being expressed as a percentage or a ratio; a worked example follows this list.
Supply Chain Management (SCM): Software systems for the procurement of materials, transformation of the materials into products, and distribution of products to customers, allowing the enterprise to anticipate demand and deliver the right product to the right place at the right time at the lowest possible cost to satisfy its customers.
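As a worked illustration of the ROI calculation defined above (the figures are invented for the example): ROI = (return - cost) / cost. An ES investment costing $100,000 that produces an annual financial benefit of $125,000 therefore has an ROI of (125,000 - 100,000) / 100,000 = 0.25, that is, 25%.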
ENDNOTES

1. A pseudonym. The name was chosen to symbolize growth.
2. A pseudonym. The name was chosen to symbolize power.
This work was previously published in Handbook of Research on Enterprise Systems, edited by Jatinder N. D. Gupta, Sushil Sharma and Mohammad A. Rashid, pp. 119-133, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 5.5
People-Oriented Enterprise Information Systems

Giorgio Bruno
Politecnico di Torino, Italy
ABSTRACT
Current notations and languages do not emphasize the participation of users in business processes and consider them essentially as service providers. Moreover, they follow a centralized approach as all the interactions originate from or end in a business process; direct interactions between users cannot be represented. What is missing from this approach is that human work is cooperative and cooperation takes place through structured interactions called conversations; the notion of conversation is at the center of the language/action perspective. However, the problem of effectively integrating conversations and business processes is still open and this chapter proposes a notation called POBPN (People-Oriented Business Process Notation) and a perspective, referred to as conversation-oriented perspective, for its solution.
INTRODUCTION

Enterprise Information Systems (EISs) were first conceived as systems providing a repository for business entities and enabling users (subdivided into appropriate roles) to handle such entities. Then the process-oriented approach (Dumas, van der Aalst, & ter Hofstede, 2005) pointed out that business purposes are achieved through coordinated work to be carried out by means of two kinds of activities: user tasks and automatic procedures. User tasks are units of work that users carry out with the help of a graphical interface in order to achieve a particular purpose. Placing a purchase requisition or filling in the review form for a conference paper are examples of user tasks. Automatic procedures, instead, accomplish their function without requiring any human intervention.
However, as the activities are developed separately from the processes, current notations and languages, such as BPMN (Object Management Group, 2008), XPDL (Workflow Management Coalition, 2007), and BPEL (OASIS, 2007), consider business processes essentially as orchestrators of work, which accomplish this function by means of interactions with external services; it makes little difference if the interaction is directed to a user task or an automatic procedure. This approach, referred to as orchestration-oriented perspective, makes the representation more homogeneous but at the expense of handling human activities as a special case of automatic procedures. If people are involved, they are considered as service providers. Moreover, each interaction that logically takes place between two users, say, A and B, such as A asking B for the approval of a certain request, is mediated by the process and therefore it results in two sequential actual interactions, the first between A and the process and the second between the process and B. What is missing from the orchestration-oriented perspective is that, in most cases, human work is cooperative and therefore human activities are not performed in isolation but in structured frameworks referred to as conversations (Winograd & Flores, 1986). Conversations are the basis of the language/action perspective (Weigand, 2006), or LAP, which was proposed for the design of information systems and business processes. According to LAP, EISs are mainly tools for coordination. An example of conversation is the one governing the approval of a purchase requisition (PR). It takes place between two users, denoted as requester and approver: the former is the initiator and the latter is the follower of the conversation. The requester starts the conversation by sending a PR to the approver, who evaluates the PR and sends it back to the requester with three alternative outcomes, that is, “accepted,” “rejected,” or “to be revised.” In the first two cases, the conversation ends; in the third one, the conversation continues
and the requester may withdraw the PR or submit a revised version. In the latter case, the approver is expected to re-evaluate the revised version and to provide the final outcome (“accepted” or “rejected”). If user tasks are abstracted away, a conversation consists of a number of interactions organized in a control flow specifying their sequence. Interactions are defined in terms of three major attributes: name (embodying the interaction meaning, for example “request,” “acceptance,” or “rejection”), direction (i.e., initiator-follower or follower-initiator) and the business entity conveyed by the interaction (such as a PR). If the business entities are left unspecified, what remains is the flow that characterizes a particular kind of conversation. This flow, referred to as conversation protocol, is a template that can be specialized by providing the actual types of the business entities exchanged. Several modeling approaches based on LAP have been proposed (a short survey is given in the next section); however, they mainly focus on the nature of the underlying interactions and do not provide an adequate solution to the integration of conversations and business processes. This integration is a major purpose of this chapter, which proposes a notation called POBPN (People-Oriented Business Process Notation) and a perspective referred to as conversation-oriented perspective. In POBPN, the top-level representation of a business process clearly separates the contribution of the users involved (in terms of their roles) from that of the automatic activities. The roles participating in the process appear as building blocks and their refinement (in second-level models) is given as a combination of the conversations they are involved in. Such conversations are actualizations of standard conversation protocols obtained by providing the actual types of the business entities exchanged during the interactions. These types come from an information model that complements the process model: this way the behavioral aspects embodied in a business process are integrated with the informational aspects provided by the companion information model. The automatic activities are meant to support the conversations for which the process itself plays the initiator role or the follower one; in the top-level representation of a business process, they are represented by a specific building block, called System. The RAD approach (Ould, 2005) is based on interacting roles as well, but these roles encompass tasks and control flow activities, while, in POBPN, roles are based on conversations. An extension of RAD towards LAP has been proposed (Beeson & Green, 2003): the basic suggestion is to introduce a middle-level construct in roles, with the purpose of grouping the tasks that are logically involved in a conversation. This chapter is organized as follows. Section 2 discusses the major limitations of the orchestration-oriented perspective and reports on current research on conversations. Section 3 introduces conversation protocols, whereas sections 4 and 5 illustrate the proposed notation, POBPN, with the help of an example. Section 6 focuses on the differences between the conversation-oriented perspective and the orchestration-oriented one. Section 7 presents the conclusion and the future work.
BACKGROUND

Business processes can be addressed with different orchestration-oriented notations, such as BPMN (Object Management Group, 2008) and UML activity diagrams (Object Management Group, 2007); however, they all have a number of features in common, as follows. They place great emphasis on the control flow of business processes, under the pressure of research on workflow patterns (van der Aalst, ter Hofstede, Kiepuszewski, & Barros, 2003); in contrast, they overlook the information flow (although information items can be included for documentation purposes) and tend to incorporate user tasks as execution steps in the process structures.
In particular, BPMN provides building blocks to represent the actual user tasks. Such building blocks can be placed in swim lanes associated with the roles needed; from an operational point of view, they enable the process to activate a user task, by sending it input information along with the indication of the actual performer(s) required, and then to wait for output information, which notifies the completion of the user task. This approach, being an attempt at incorporating user tasks in the process structure, does not cope with all the situations taking place in practical applications. It only covers one interaction pattern between a process and a user task, that is, the one in which the user task receives one input message (on its activation) and delivers one output message (on its completion); this pattern is referred to as (1, 1). There are three more task interaction patterns, as follows. Pattern (1, *) indicates that the task sends a number of intermediate output messages before the completion one. These additional output messages signify that the task has produced information items requiring immediate attention. For example, task “reviewPapers” is started on the arrival of a folder of papers to be reviewed and allows the reviewer to release the reviews one by one. Pattern (*, 1) indicates that the task may receive additional input messages, after its activation. For example, task “assignPapers” is activated with an initial group of papers and then it may receive additional papers, before its completion. The fourth pattern (*, *) denotes both a flow of input messages and a flow of output ones. For example, task “evaluatePapers” receives a flow of reviews and provides a flow of papers evaluated, each of which will trigger an immediate notification to its author. To overcome the above-mentioned limitations, the process, instead of incorporating the user tasks in its control flow, should emphasize the underlying interactions taking place with them. The notion of interaction is important because it leads to a clear separation between processes and
user tasks; however, interactions are part of larger structures encompassing all those exchanged between two parties for a common purpose. At this point, the notion of conversation comes into play. Winograd and Flores (1986) introduced the term “conversation for action” to indicate the kind of conversation aimed at producing an effect on the real world through the cooperation of two parties. Taking advantage of the theory of speech acts (Austin, 1976), which studies how people act through language, they proposed a novel perspective, referred to as the language/action perspective (LAP), for the design of information systems and business processes. According to LAP (Weigand, 2006), information systems are mainly tools for coordination. Conversations may be carried out for different purposes: Winograd (1987) mentions conversations for clarification, for possibilities, and for orientation in addition to conversations for action. Several modeling approaches based on LAP have been proposed, among which stand out Action Workflow, DEMO, and BAT. In the Action Workflow approach (Medina-Mora, Winograd, Flores, & Flores, 1992), a typical conversation takes place between a requester and a performer and is made up of four major phases, that is, request, commitment, performance, and evaluation, forming the so-called workflow loop. The first two phases establish a commitment between the parties and the last two phases bring the parties to an agreement on the result. The actual workflow might be complicated because negotiations can be made in each phase. In the DEMO approach (Dietz, 2006), workflow loops are subdivided into three phases, that is, order, execution, and result, and are encapsulated in transactions. Business processes are compositions of roles and transactions, where transactions connect two transactional roles. Transactional roles are compound entities including transactional tasks (i.e., the tasks being part of the transactions the role is involved in) and the
control-flow logic handling their ordering. In the hierarchical structure provided by DEMO, tasks are compared to atoms, transactions to molecules, and business processes to fibers (Dietz, 2003). Transactional roles might not have a one-to-one mapping to organizational roles, because the tasks included in a transactional role might be assigned to different organizational roles: this is needed to support delegation (i.e., the situation in which the actor who receives the order is not the one who delivers the result) but could make the model more difficult to understand. BAT (Goldkuhl & Lind, 2004) draws upon the ideas of Action Workflow and DEMO and addresses business interactions in an inter-organizational framework. In particular, it analyzes the relationships between a supplier and a customer in a number of situations including single business transactions and frame contracting (with embedded transactions). Although the above-mentioned LAP approaches yield a deeper understanding of the nature of interactions, they do not provide an operational solution to the integration between business processes and user tasks through conversations. This integration is pursued by the conversation-oriented perspective illustrated in the next sections along with the related notation, POBPN.
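Returning to the four task interaction patterns enumerated earlier in this section, they can be summarized as a pair of multiplicity flags attached to a task's input and output message flows. The following sketch is purely illustrative (the names are invented and are not part of BPMN, XPDL, BPEL, or POBPN); it simply records the patterns and the example tasks named above:

```python
# Illustrative only: the four task interaction patterns as (input, output)
# message multiplicities, where "1" means a single message and "*" a flow.
PATTERNS = {
    "(1, 1)": "one input message on activation, one output message on completion",
    "(1, *)": "intermediate output messages may precede the completion one",
    "(*, 1)": "additional input messages may arrive after activation",
    "(*, *)": "a flow of input messages and a flow of output messages",
}

# The example tasks mentioned in the text, classified by pattern.
EXAMPLE_TASKS = {
    "reviewPapers":   "(1, *)",  # releases the reviews one by one
    "assignPapers":   "(*, 1)",  # may receive additional papers before completing
    "evaluatePapers": "(*, *)",  # receives a flow of reviews, emits evaluated papers
}
```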
CONVERSATION PROTOCOLS

This section illustrates the notion of conversation protocol and shows the notation adopted in POBPN, with the help of a number of examples to be reused in the next sections. A conversation implies a flow of interactions between the parties, an interaction being the communication of one party’s intention to the other party. Intentions can be categorized and identified by specific symbols, as illustrated later on in this section. Examples of intentions are making a request, accepting a request, and providing a reply.
Figure 1. Examples of conversation protocols (the four protocols ApprovalCP, BasicCfACP, NotificationCP, and Notification1CP, shown as flows of interactions such as r, r+, r-, r~, ~r, -r, rep, ~rep, rep~, and n over generic types T, T1, and T2, with alt and opt fragments)
An interaction conveys the business entity that is the object of the intention. This business entity is referred to as the business content of the interaction. Conversations can be categorized on the basis of their purpose: for example, there are conversations for approval and conversations for action. If the details of the business content are ignored, all the conversations with the same purpose turn out to have the same interaction flow, and the structure of this flow, called conversation protocol, is the major concern of this section. A conversation protocol is an abstraction as it is defined independently of the types of the business entities exchanged by the parties; it is a template in which such types are given generic names. When a protocol is actualized, generic types will be replaced with actual types: as will be shown in the next section, an actualized protocol relies on an information model providing the definition of the types of the business entities involved. State models are often used (Winograd, 1987) to define conversation protocols graphically; POBPN, instead, adopts UML sequence diagrams (Object Management Group, 2007), because they focus on the essential elements, that is, the
interactions, while providing a simple hierarchical structuring mechanism based on fragments. Four examples of protocols are shown in Figure 1. As a general rule, the lifelines of the two actors involved in the conversation are not shown; however, the initiator is meant to be located on the left side of the diagram and the follower on the right side. The first interaction of a conversation is called initial interaction and the business entity that it conveys is referred to as initial business entity. Timing constraints are not considered so as to keep the examples reasonably simple. The first protocol in Figure 1 addresses conversations for approval and is called “Approval conversation protocol” (ApprovalCP in short). These conversations start with a request, indicated by symbol “r”; the business content, that is, the entity for which the approval is asked, is denoted by a generic type, such as “T.” The follower can accept the request (r+), reject it (r-), or ask for some improvement (r~). The three alternatives are enclosed in a fragment marked with keyword “alt.” In the first two cases, the conversation is ended; in the third one, the initiator can then withdraw the request (-r) thus ending the conversation or he or she can submit a revised version (~r), which will be followed by an acceptance (r+) or a rejection (r-).
There is only one generic type involved, that is T, because the business entities exchanged during this kind of conversation are meant to be all of the same type. Protocol BasicCfACP defines simple conversations for action. As in the first example, the conversation is started with a request (r) coming from the initiator. The follower may reject the request (r-) or may provide a reply. The interaction conveying the reply is indicated with a specific keyword, “rep.” It is accompanied by a different business entity, denoted by generic type T2, because the purpose of a conversation for action is to make the follower deliver a new entity as the result of the action caused by the initiator’s request. Upon receiving the reply, the initiator may then ask for a clarification (~rep) to be followed by a clarification (rep~) after which the conversation is ended. The sequence of interactions ~rep and rep~ is optional and, therefore, it is enclosed in a fragment marked with keyword “opt”. The protocol of conversations for action can become more complicated (Winograd, 1987), if counter-offers are included. Protocol NotificationCP consists of one interaction (n), interpreted as a notification (i.e., a simple communication requiring no reply). Protocol Notification1CP ends with a notification, which may be preceded by an optional sequence consisting of a request for clarification (~r) followed by a clarification (r~).
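Since ApprovalCP is completely determined by the interaction flow just described, it can be rendered as a small finite-state machine for illustration. The sketch below is not POBPN machinery; the state names are invented, and only the transitions stated in the text are encoded:

```python
# A minimal sketch: ApprovalCP as a finite-state machine that validates
# interaction sequences. "end" marks the terminal state of the conversation.
APPROVAL_CP = {
    "start":     {"r": "requested"},                      # initial request
    "requested": {"r+": "end", "r-": "end", "r~": "revisable"},
    "revisable": {"-r": "end", "~r": "revised"},          # withdraw or revise
    "revised":   {"r+": "end", "r-": "end"},              # final outcome only
}

def is_complete_run(protocol, interactions):
    """Return True if the sequence is a complete conversation of the protocol."""
    state = "start"
    for i in interactions:
        if i not in protocol.get(state, {}):
            return False
        state = protocol[state][i]
    return state == "end"

assert is_complete_run(APPROVAL_CP, ["r", "r~", "~r", "r+"])  # revise, then accept
assert not is_complete_run(APPROVAL_CP, ["r", "r~", "r+"])    # r+ invalid after r~
```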
PEOPLE-ORIENTED BUSINESS PROCESSES

As mentioned in the introduction, the conversation-oriented perspective places great emphasis on the participation of users in business processes, and the notation proposed, POBPN, aims at providing an effective integration between conversations and business processes. To clarify the features of POBPN, an example concerning a business process handling purchase requisitions (PRs) is
worked on in this section and in the following ones. The process is referred to as PRBP and its requirements are as follows. PRs are submitted by users entitled to do so; such users play the Requester role with respect to process PRBP. When a PR is submitted, it may be automatically approved or it may be sent for evaluation to the appropriate supervisor. The first case occurs when the amount of the PR is less than the personal expenditure threshold associated with the requester; in this case, the PR is considered to be an “inexpensive” one. When the second case occurs, the (expensive) PR is directed to the requester’s boss (who becomes the supervisor involved). The requester is informed of the result (acceptance or rejection). The supervisor may also ask for some improvement and the requester may provide a revised version of the PR or may withdraw it. The PRs approved are sent to the purchasing department where they are handled by a number of equivalent buyers. Buyers are assumed to pick pending PRs from a common queue and to include them in purchase orders. In case a buyer has some doubts, he or she can ask the PR requester to make the necessary clarification; when a buyer has included the PR in a purchase order, he or she sends a confirmation to the PR requester. Sometimes supervisors may ask one or more reviewers to provide third-party reviews of PRs considered to be critical. How this is done can be subject to different rules: in this case study, the conversation related to a single review is expected to conform to conversation protocol BasicCfACP shown in Figure 1. In addition, if a supervisor has started a review conversation for a certain PR, he or she must wait for its conclusion before starting a new review conversation or taking the final decision, for the same PR. The analysis of the requirements above leads to the discovery that five roles are involved. Four of them are user roles: Requester, Supervisor, Reviewer, and Buyer. The fifth role involved, the System role, is responsible for all the activities to be performed automatically (such as the approval of inexpensive PRs). These roles take part in several conversations, as follows. For the approval of a PR, its requester interacts with either his/her supervisor or the System; to get support for a critical PR, a supervisor may interact with a number of reviewers; when processing a PR, a buyer interacts with the PR requester (at least to notify him/her that the PR has been included in a purchase order). In addition, since approved PRs need to be processed by the purchasing department, notifications are sent by the System to the buyers. The above-mentioned four conversations fit the patterns presented in the previous section; however, it is necessary to indicate which business entities are involved. An information model is then needed, which defines the business entity types along with their most relevant attributes and relationships. This information model is the companion of the process model for two major reasons: firstly, the actualization of conversation protocols into actual conversation types is carried out by replacing the generic types used in the protocols with the actual types provided in the information model. Secondly, in the process model there are a number of data-driven aspects, such as the selection of a conversation follower (to
be illustrated later on in this section), and they can be adequately expressed only if an information model is provided. The companion information model for process PRBP is shown in Figure 2 as a UML class model (Object Management Group, 2007). Only the major items are considered, for the sake of simplicity. Users are represented by classes marked with a stereotype. Such classes are not meant to be the actual ones, because the decision on how to represent users in the information system is postponed to the software development phase; they can be considered as interfaces to the actual classes. The relationship between role Requester and role Supervisor is a consequence of the requirement that an expensive PR has to be approved by the requester’s boss who acts as the supervisor for the PR. Associative attribute “boss” enables the navigation from a requester entity to the corresponding supervisor entity. A PR is related to one requester entity (which can be reached through associative attribute requester) and possibly to one buyer entity (only in case the PR has been approved). A PR may also be linked to one purchase order (in case it has been approved), to a number of reviewers, as well as to a number of reviews.
Figure 2. The companion information model of process PRBP (a UML class model relating the user classes Supervisor, Requester, Buyer, and Reviewer to the PR, POrder, and Review entities through the associative attributes boss, requester, pr, and reviewers). Attributes: PR: float amount. Requester: float threshold.
Purchase Orders are represented by class POrder. A review is linked to the reviewer entity representing the reviewer who has provided it. Associative attribute “pr” provides the PR associated with a given review. Relationships PR-Review and PR-Reviewer are both needed in that a reviewer could refuse to provide the review he or she has been asked for. Associative attribute “reviewers” provides the list of Reviewer entities associated with a given PR. Two relevant attributes are also shown in Figure 2, because they are mentioned in the requirements of PRBP: they are the amount attribute in PR entities and the threshold (for expenditure) attribute in Requester entities. On the basis of the analysis made so far, the model of process PRBP can be worked out. Its top-level portion is presented in Figure 3 and consists of three parts: the process diagram, the section defining conversation types and the one (Links) showing how the followers of the conversations will be determined.
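For readers who prefer code to class diagrams, the entities and associative attributes of Figure 2 can be sketched as follows. This is an illustrative reading in Python dataclasses, not part of POBPN; only the items mentioned in the text are included:

```python
# An illustrative rendering of the Figure 2 information model.
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Supervisor:
    pass

@dataclass
class Requester:
    threshold: float        # personal expenditure threshold
    boss: Supervisor        # associative attribute "boss"

@dataclass
class Buyer:
    pass

@dataclass
class Reviewer:
    pass

@dataclass
class POrder:
    pass

@dataclass
class PR:
    amount: float
    requester: Requester                      # associative attribute "requester"
    buyer: Optional[Buyer] = None             # set only if the PR is approved
    porder: Optional[POrder] = None           # set once included in a purchase order
    reviewers: List[Reviewer] = field(default_factory=list)
    reviews: List[Review] = field(default_factory=list)

@dataclass
class Review:
    pr: PR                                    # associative attribute "pr"
    reviewer: Reviewer
```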
The approach adopted in POBPN is to represent participating roles as the top-level building blocks (referred to as role participations or simply participations) of the process diagram and to show the conversations involved as ports, depicted as small squares on the participation icons. Each conversation gives rise to two ports, one in the initiator role (referred to as initiator port) and the other in the follower role (follower port). Such ports are given identical names but different colors; initiator ports are white, while follower ones are grey. Ports are connected through labeled links, whose details are shown in the Links section. The port names are names of conversation types. Section “Conversation types” declares conversation type “a” to be based on conversation protocol ApprovalCP with generic type T replaced with entity type PR, while conversation types “b,” “c,” and “v” are actualizations of protocols NotificationCP, Notification1CP and BasicCfACP, respectively. Section “Links” contains a number of link definitions, whose purpose is to show how the
followers of the conversations will be determined.

Figure 3. The top-level representation of process PRBP (the process diagram connects the Requester, Supervisor, Reviewer, Buyer, and System participations through conversation ports and the links l1-l5). Conversation types: a: ApprovalCP (T = PR); b: NotificationCP (T = PR); c: Notification1CP (T = PR); v: BasicCfACP (T1 = PR, T2 = Review). Links: l1 (PR pr): [pr.amount > pr.requester.threshold] pr.requester.boss; l2 (PR pr): [pr.amount <= pr.requester.threshold]; the definitions of l3 and l5 navigate to the last reviewer added to the PR and to the PR requester, respectively.

A link definition consists of a parameter declaring the type of the initial business entity of the conversation, an optional guard enclosed in square brackets, and an optional follower clause. Guards, such as “pr.amount > pr.requester.threshold,” consist of two terms, “pr.amount” and “pr.requester.threshold.” The evaluation of the first term returns the amount of the current PR, and the evaluation of the second one gives the personal expenditure threshold of the PR requester: if the guard is true, the follower will be the requester’s boss. In most cases, the follower of a conversation is represented by a business entity related, through a certain path of associations, to the initial business entity: then, the follower clause is needed and it consists of a navigational expression indicating how to reach the follower business entity from the initial one. Two associative attributes are involved in the follower clause of l1, that is, requester and boss: in the information model shown in Figure 2, they determine a path from a PR to a Supervisor entity. In the definition of link l2, the follower clause is missing, because the follower role is System: when the guard of l2 is true, the conversation is followed by automatic activities and no supervisor is involved. The definition of link l4 is missing because both the guard and the follower clause are not
needed. In general, when the follower clause is missing and the follower role is a user role, such as Buyer in link l4, the assignment of a particular follower will take place at run-time; it is up to the members of the role to decide which of them will follow a new conversation, as it happens with buyers who autonomously pick pending PRs from a common pool. In general, the selection of the performer is a critical issue and several patterns have been proposed (Russell, van der Aalst, ter Hofstede, & Edmond, 2005). The follower clause in link l3 specifies that the reviewer to be involved is the last added to the collection of reviewers associated with the PR that started the conversation. Link l5 indicates that the follower is the requester of the current PR. Process diagrams are mainly architectural models, as they show the roles participating in the process and the conversations that may take place between them. Participation building blocks can be refined into second-level models, called participation models, and the next section is dedicated to their illustration.
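Continuing the dataclass sketch above, the link definitions of process PRBP amount to small functions pairing a guard with a navigational follower clause. The functions below are illustrative (SYSTEM is an invented sentinel for the System role, and the guard of l2 is the stated complement of the guard of l1):

```python
SYSTEM = object()  # invented sentinel standing for the System role

def l1(pr: PR):
    """Expensive PRs: the follower is the requester's boss."""
    if pr.amount > pr.requester.threshold:
        return pr.requester.boss

def l2(pr: PR):
    """Inexpensive PRs: the conversation is followed by automatic activities."""
    if pr.amount <= pr.requester.threshold:
        return SYSTEM

def l3(pr: PR):
    """The reviewer involved is the last one added to the PR."""
    return pr.reviewers[-1]

# l4 has no definition: at run time any buyer may pick the pending PR
# from the common pool.

def l5(pr: PR):
    """The buyer's notification conversation is followed by the PR requester."""
    return pr.requester
```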
PARTICIPATION MODELS

Participation models define the involvement of the users (identified by their roles) or the contribution of the System in the business process being considered. Those related to users are called user participation models and fulfill two major aims: firstly, by looking at them, users get an insight into the overall sequence of interactions they will be involved in; secondly, they are the basis of the implementation of the actual user tasks. The System participation model (there is at most one in a business process) is mainly meant to handle the automatic activities. Both types of models are described in this section and the examples given refer to process PRBP: its user participation models are shown in Figure 4.
Basically, user participation models are combinations of conversations. For this reason, their structure is like the one of conversation protocols with interactions replaced by references to conversation types. Such references are placed in constructs similar to the “InteractionUse” items available in UML sequence diagrams (Object Management Group, 2007); however, in POBPN they are called “ConversationUse” items, since their meaning is different. In these items, the conversation type name appears in a small rectangle at the upper left corner: the rectangle is white or grey depending on whether the current role is the initiator or the follower, respectively. The Reviewer participation model is based only on conversation “v,” and reviewers are followers of such conversations. The Buyer participation model is a simple sequence of conversations “b” and “c”; buyers are followers of conversations “b” and initiators of conversations “c.” The Requester participation model is based on conversation “a,” which, if it ends successfully (i.e., if it ends with r+), is followed by conversation “c.” In the optional fragment, operator opt is followed by an informal expression describing when conversation “c” is carried out. An important feature of user participation models is the ability of structuring conversations hierarchically, as shown in the Supervisor
participation model. According to the PRBP requirements presented in the previous section, a supervisor, during conversation “a,” and precisely after receiving a new PR or a revised one, may carry out a number of sequential conversations “v.” What is needed is the possibility of including a number of conversations “v” into a conversation “a.” The original conversations are not modified, but they are combined so as to define the overall behavior of the role. POBPN supports the nesting of ConversationUse items, as follows. ConversationUse items are not black boxes, but they can include one or more extensions. An extension is a partition made up of two parts: the first part shows the interaction(s) preceding the extension (referred to as extension points), and the second part contains the conversations (represented by their ConversationUse items) to be carried out before resuming the enclosing conversation. If the enclosed conversations are subjected to constraints, they are placed in appropriate fragments. The Supervisor participation model extends conversation “a”: the extension points are interactions “r” or “~r” (referring to the receipt of a new PR or of a revised one, respectively), which are followed by a loop fragment including conversation “v.” The loop operator introduces a sequence (possibly empty) of conversations “v.” The System participation model addresses those conversations for which the process itself plays the initiator role or the follower one.
Figure 4. The user participation models in process PRBP (Reviewer: follower of conversations “v”; Buyer: follower of “b” followed by initiator of “c”; Requester: initiator of “a”, followed by “c” if “a” ended with r+; Supervisor: follower of “a”, extended at interactions r and ~r with a loop of conversations “v”)
Figure 5. The System participation model in process PRBP (an interaction net in which followed interaction “a.r” adds PR tokens to place p1, whose output transition t1 issues interactions “a.r+” and “b.n”, while monitored interaction “Supervisor.a.r+” adds PR tokens to place p2, whose output transition t2 issues interaction “b.n”)
In PRBP there are two such conversations: the process is the follower of conversations “a” for inexpensive PRs and the initiator of conversations “b.” The related interactions must be dealt with by means of automatic activities and, moreover, control-flow logic is needed to enforce ordering constraints between the activities. The notation used in System participation models is different from the one adopted in user participation models and follows an orchestration-oriented perspective. The orchestration-oriented notation used in POBPN is an extension of colored Petri nets (Kristensen, Christensen, & Jensen, 1998) and is called “interaction nets” to emphasize that its major purpose is to handle interactions, on which orchestrations are based. This chapter does not provide a formal definition of interaction nets; instead, it provides two examples, the first of which, shown in Figure 5, defines the System participation model in process PRBP. The second example is given in the next section. Interaction nets are made up of places, transitions, arcs and interaction symbols. Places are containers of tokens, which refer to business entities, such as PRs or reviews. They have a name and a label (or place type): the former is written in the circle representing the place and the latter outside the circle and close to it. The label is the name of the type of the business entities that are referred to by the tokens contained in the place. The business entity types are those
defined in the companion information model of the process being considered; the one related to process PRBP is shown in Figure 2. This solution eliminates the need for complex data patterns (Russell, ter Hofstede, Edmond, & van der Aalst, 2005) as all the data needed are associated with the tokens: the information flow is integrated with the control flow. Places may have input arcs and output ones: the input arcs of a place come from its input transitions and the output ones are directed to its output transitions. Transitions have input arcs and output ones, as well: the input arcs of a transition come from its input places and the output ones are directed to its output places. Tokens are added to places by their input transitions and are removed by their output transitions; they can also be added by followed interactions as described later on in this section. Transitions are the automatic activities performed by the process; they are token-driven as they are activated when the proper tokens are available in their input places. The basic rule is that a transition fires as soon as there is one token in each of its input places. When it fires, a transition takes the input tokens (i.e., it removes the activating tokens from the corresponding input places), performs an action (which can operate on the business entities referred to by the input tokens), and delivers the appropriate (output) tokens to its output places. When there is a correspondence between the types of the input places and those of the output places, tokens are simply moved from the input places to the output ones: this behavior
is referred to as automatic propagation rule. In the other cases, the output tokens must be generated in the action. The behavior of transitions can be complemented by a textual description that is made up of two sections, that is, the selection rule (introduced by keyword “s”) and the action (introduced by keyword “a”). These sections may include operational code, but the examples shown only include informal descriptions, for the sake of simplicity. If a transition is provided with a suitable selection rule, it can take a variable number of tokens from its input places and the selection can be based on the contents of the business entities linked to the tokens. If the selection rule is omitted, the basic rule is applied. Interaction symbols are depicted as labeled arcs, which may be either input arcs of places or output arcs of transitions. They represent ordinary interactions or monitored ones. Ordinary interactions are those in which the process is directly involved and may be divided into followed interactions and initiated ones. Their labels, such as “a.r,” consist of the name of a conversation type and an interaction name separated by a dot. The conversation types correspond to the names of the ports that appear in the System block in the process diagram. Followed interactions are depicted as input arcs of places, because the effect of a followed interaction is to add a new token (referring to the business entity conveyed by the interaction) to its output place. Initiated interactions are depicted as output arcs of transitions: when a token flows along such an arc (this occurs by virtue of the automatic propagation rule or by an explicit command issued in the input transition) the corresponding interaction is started and the follower is determined on the basis of the links defined in the top-level process model. Ordinary interactions appear in the left part of Figure 5. When an inexpensive PR is submitted by a requester, interaction “a.r” is performed and its effect is to add a token (referring to the PR) to
place p1 whose type, PR, coincides with the type of the incoming business entity. Transition t1 is then triggered and its action consists in performing two interactions, “a.r+” and “b.n.” On the basis of the automatic propagation rule, the business entity taken from place p1 is sent through output interactions “a.r+” and “b.n,” without the need of writing any code in the action of t1. The recipient of interaction “a.r+” is the one that started conversation “a,” whereas the recipient of interaction “b.n” is determined by link l4 shown in Figure 3, because this interaction is the initial one of a conversation “b.” Monitored interactions are those interactions the process has to be aware of so as to take appropriate actions, although it is not their follower. They are shown dashed and their labels include the initiator role before the conversation type name and the interaction name. One such interaction appears in the right part of Figure 5: its purpose is to enable the activation of conversation “b” when conversation “a” for an expensive PR ends successfully. Conversations “a” for expensive PRs are handled by supervisors, but they are not supposed to decide what has to be done after the successful approvals of PRs. It is up to the process to trigger the subsequent processing of approved PRs; therefore, it must be informed of the successful conclusion of a conversation “a” so as to start a conversation “b” with the buyers—these two conversations operate on the same PR. Monitored interactions can only provide tokens to the process in that they are used to trigger complementary behavior.
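To make the token-driven behavior concrete, the following minimal sketch restates the basic firing rule and the automatic propagation rule in Python. It is an illustration of the rules described above, not the formal semantics of interaction nets:

```python
class Place:
    """A container of tokens; entity_type plays the role of the place label."""
    def __init__(self, name, entity_type):
        self.name, self.entity_type, self.tokens = name, entity_type, []

class Transition:
    """An automatic activity: fires when every input place holds a token."""
    def __init__(self, name, inputs, outputs, action=None):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.action = action  # may start interactions or build new entities

    def enabled(self):
        return all(p.tokens for p in self.inputs)

    def fire(self):
        taken = [p.tokens.pop(0) for p in self.inputs]  # basic selection rule
        if self.action:
            self.action(taken)
        # Automatic propagation: move tokens to type-compatible output places;
        # in the other cases the output tokens must be built in the action.
        for place in self.outputs:
            for token in taken:
                if isinstance(token, place.entity_type):
                    place.tokens.append(token)

def followed_interaction(place, entity):
    """A followed interaction (e.g., "a.r") adds the conveyed entity to a place."""
    place.tokens.append(entity)
```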
COMPARISON OF NOTATIONS

The aim of this section is to provide a comparison between the notations used in the conversation-oriented perspective and in the orchestration-oriented one. In the former approach, the emphasis is placed on the conversations between users and on how they are organized in participations: the explicit contribution of the process is restricted to the definition of the automatic activities, as shown in Figure 5 for process PRBP.
Figure 6. The conversation-oriented interaction flow for the scenario proposed (a sequence diagram with lifelines for R, S, the System, V, and B and the interactions a.r, v.r, v.rep, a.r+, b.n, and c.n, with a dashed arc from the monitored interaction a.r+ to b.n)
In the orchestration-oriented perspective, processes are thought of as explicit orchestrators of all the activities, both the automatic ones and the user tasks. Conversations are not a first-class notion of the approach; therefore, the process has to make up for the lack of conversation-oriented constructs by directly handling the corresponding flow of interactions. Conversation-oriented notations provide a higher-level representation than orchestration-oriented ones and the difference between them can be easily recognized on the basis of a simple scenario, as follows. The scenario refers to a particular occurrence of process PRBP involving four users: requester R, his/her supervisor S, reviewer V and buyer B. Requester R submits an expensive PR, which is sent to S for evaluation. S asks the support of V and then approves the PR. Buyer B receives the PR from the process and then notifies R of its inclusion in a purchase order. On the basis of the scenario proposed, process PRBP shown in Figure 3 gives rise to a number of interactions that are graphically represented in the sequence diagram presented in Figure 6. The interaction flow develops as follows. When requester R submits an expensive PR, the process starts a conversation between R and S in conformity to link l1, and the initial interaction takes place. Its label, “a.r,” is the name of a conversational interaction, as the interaction name is preceded by the name of the conversation type the interaction belongs to. In the participation model of the supervisor shown in Figure 4, interaction “a.r” is an extension point that can be followed by a number of conversations “v” with reviewers. In the scenario above, supervisor S asks reviewer V to provide a review of the PR and this leads to the pair of interactions “v.r” and “v.rep” shown in the sequence diagram. Then the PR is approved by supervisor S, resulting in interaction “a.r+.” This interaction is a monitored one as indicated in the System participation model shown in Figure 5, and therefore it triggers interaction “b.n” between the System and buyer B. The dashed arc connecting interaction “a.r+” and interaction “b.n” emphasizes that “a.r+” is a monitored interaction and “b.n” is a consequence. Buyer B is supposed not to need any additional clarification from the requester and therefore the scenario ends in notification “c.n” sent by B to requester R. With an interaction-oriented notation, the sequence diagram for the same scenario becomes more complicated, because interactions between users (from an initiator to a follower) are mapped to pairs of sequential interactions involving the process: the first interaction takes place between the initiator and the process, the second one between the process and the follower.
Figure 7. The orchestration-oriented interaction flow for the scenario proposed (a sequence diagram in which every interaction is mediated by the System: (R)a_r and a_r(S), (S)v_r and v_r(V), (V)v_rep and v_rep(S), (S)a_r+ and a_r+(R), b_n(B), (B)c_n and c_n(R))
The orchestration-oriented sequence diagram related to the scenario proposed is shown in Figure 7. For example, interaction “a.r” from requester R to supervisor S is mapped to interaction “(R)a_r” from R to the process, and to interaction “a_r(S)” from the process to S. The name of the interaction is obtained from that of the corresponding conversational interaction by replacing the dot operator with character “_”. In addition, in a user-process interaction, the initiator is indicated between parentheses before the interaction name, whereas, in a process-user interaction, the follower is indicated between parentheses after the interaction name. The labels of the process-user interactions are shown on the right of the System lifeline, while those of the user-process interactions are shown above the corresponding lines. The notion of monitored interaction is not needed, as the process is an explicit orchestrator of all the interactions. A business process defined with an interaction-oriented notation corresponds to a POBPN process that only includes the System participation model, as all the interactions between users are mapped to pairs of interactions involving the process.
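The naming convention just described is mechanical enough to be stated as a one-line transformation; the helper below is hypothetical and for illustration only:

```python
def orchestrate(interaction, initiator, follower):
    """Map a conversational interaction to its two process-mediated legs."""
    name = interaction.replace(".", "_")  # "a.r" becomes "a_r"
    return [f"({initiator}){name}",       # user-to-process leg
            f"{name}({follower})"]        # process-to-user leg

assert orchestrate("a.r", "R", "S") == ["(R)a_r", "a_r(S)"]
```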
As an example, the orchestration-oriented version of process PRBP is shown in Figure 8. It is the second example based on the interaction nets introduced in the previous section, and includes the description of a number of transitions. To reduce the complexity of the model, a number of notational simplifications have been introduced, as follows. If there are two transitions in series (Murata, 1989), the intermediate place (together with its input arc and its output one) can be replaced with a link that directly connects the two transitions and is labeled with the name and type of the place eliminated. Examples of such links are the one connecting transitions tb1 and tb2 and the one connecting tv1 and tv2. If a place receives tokens only from a followed interaction and it is connected only to one transition, then it can be eliminated and the followed interaction can be directly connected to the transition: this simplification is carried out with several transitions, for example, tb1 and tb2. The third kind of simplification makes use of ε-arcs: an ε-arc links two places of the same type and its effect is that the tokens included in the source place also belong to the destination place and can trigger its output transitions (the vice versa does not hold); if two places, say p1 and p2, are the source and the destination, respectively, of an ε-arc, p1 is said to be joined to p2.
Figure 8. The orchestration-oriented model of process PRBP (an interaction net whose places p1, p2, ps1-ps4, pv1-pv5, and pb1-pb3, all of type PR, and transitions t1, t2, ts1-ts6, tv1-tv4, and tb1-tb3 orchestrate the interactions a_r, a_r+, a_r-, a_r~, a_~r, a_-r, v_r, v_rep, v_~rep, v_rep~, b_n, c_n, c_~r, and c_r~ between the users and the process)
The interaction labels shown in Figure 8 follow the same convention adopted in Figure 7. The meaning of the model shown in Figure 8 is as follows. Submitted PRs enter place p1 and then trigger either transition ts1 or transition t1 depending on which selection rule is satisfied. The incoming PR is associated with the input token taken from place p1 and the selection rules and the actions of transitions ts1 and t1 use the place name (i.e., p1) to refer to this PR. Transition ts1 fires in case of an expensive PR, as shown in its selection rule: its action consists in initiating interaction “a_r (S)” with the proper supervisor. The follower-selection rules, which in POBPN are presented in section “Links” of the top-level process model, with this notation, must be embedded in transitions. Primitive “set” is used to establish the follower of an initiated interaction. In the action of ts1, the boss of the requester of the input PR is set as “S”, because S denotes the follower of interaction “a_r”. Transition ts1 puts the PR into place ps1 indicating that the PR is waiting for the supervisor’s move.
Place ps1 has two outgoing flows. The one on the left handles the case in which the supervisor requests some modifications to the PR: then ts2 fires and puts the PR in place ps4 until the requester makes his/her move, which can be the withdrawal of the PR or the submission of a revised version: these moves cause ts3 or ts4 to fire, respectively. The firing of ts4 puts the PR in place ps2, where the left path ends. The left path is optional, because ps1 is joined to ps2 and ps2 is joined to both ps3 and pv1. Therefore the supervisor may immediately approve or reject the PR (from place ps3) or ask for external support (from place pv1). If the supervisor accepts or rejects the PR, ts6 or ts5 will fire, respectively; the firing of ts6 puts the PR in place p2 where the interaction flow involving the buyer starts. The path starting at place pv1 handles the interactions with the reviewers. Transition tv1 fires if the supervisor requests a reviewer’s support and initiates interaction “v_r” with the reviewer; the reviewer is the one represented by the last reviewer entity added to the PR, as shown in the action of tv1. The PR is then put in the intermediate place
included in the link between transition tv1 and transition tv2 (connected in series) and waits for the arrival of the corresponding review. When this happens, tv2 fires: it relays the review to the supervisor (i.e., the boss of the PR requester, as indicated in the action) and puts the PR in place pv3. Place pv3 is joined to place ps3 and place pv1, as the supervisor, at this point, may issue the final decision or ask for another review. In addition, the supervisor may ask the reviewer for some clarification: for this reason, place pv3 is also followed by transition tv3, which relays the clarification request to the reviewer. The path goes on with transition tv4, which is triggered by the reply from the reviewer and relays it to the supervisor; the PR is then put in place pv5 joined to places ps3 and pv1. Place p2 starts the interaction flow with a buyer. Transition t2 is meant to assign the PR to a buyer. The PR is then put in place pb1, which has two outgoing paths. The path on the left is optional: it enables the buyer to ask some clarification of the requester and ends in place pb3 to which place pb1 is joined. Transition tb3 waits for the confirmation from the buyer and relays it to the requester.
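For reference, the ε-arc joins stated in the walkthrough above can be collected in one table (an illustrative summary using the place names of Figure 8; tokens in the source place also enable the output transitions of the destination places):

```python
# Source place -> places it is joined to, as stated in the text.
JOINS = {
    "ps1": ["ps2"],
    "ps2": ["ps3", "pv1"],
    "pv3": ["ps3", "pv1"],
    "pv5": ["ps3", "pv1"],
    "pb1": ["pb3"],
}
```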
CONCLUSION

This chapter has proposed a notation, POBPN, and a conversation-oriented perspective to effectively integrate conversations and business processes in an intra-organizational context. The involvement of users in business processes is clearly shown by means of participation building blocks appearing in the top-level representation of a business process. Such participation items can be refined into second-level models which are basically combinations of conversation models. An example has been presented in order to stress the differences between the orchestration-oriented perspective (followed by most of the current notations and languages) and the conversation-oriented one. The former is a centralized approach in which all the interactions are explicitly mediated by the process. If a user looks at a process description, he or she cannot immediately identify the interactions he or she will be involved in, as all user activities are intermingled. In the approach proposed, instead, conversation models come first (from the actualization of standard conversation protocols) and the process model is built upon them. The explicit contribution of the process is restricted to those conversations for which the process itself plays the initiator role or the follower one. Current work is going on in two directions. One line of research is concerned with the inclusion of inter-organizational conversations. For this reason, the similarities between conversation protocols and B2B collaborations or choreographies need to be thoroughly analyzed. The other line of research is more pragmatic and addresses the automatic mapping of POBPN models to orchestration-oriented ones; the purpose is to take advantage of current technology while preserving the conceptual strength of the conversation-oriented perspective.
This work was previously published in Social, Managerial, and Organizational Dimensions of Enterprise Information Systems, edited by Maria Manuela Cruz-Cunha, pp. 63-80, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 5.6
A SOA-Based Approach to Integrate Enterprise Systems

Anne Lämmer, University of Potsdam, Germany
Sandy Eggert, University of Potsdam, Germany
Norbert Gronau, University of Potsdam, Germany
ABSTRACT

This chapter presents a procedure model for the integration of enterprise systems. To this end, enterprise systems are transferred into a service-oriented architecture. The procedure model starts with the decomposition into Web services. This is followed by mapping redundant functions and assigning the original source code to the Web services, which are orchestrated in the final step. Finally, an example is given of how to integrate an Enterprise Resource Planning System with an Enterprise Content Management System using the proposed procedure model.

INTRODUCTION

Enterprise resource planning systems (ERP systems) are enterprise information systems designed to support business processes. They partially or completely include functions such as order processing, purchasing, production scheduling, dispatching, financial accounting and controlling (Monk & Wagner, 2005). ERP systems are the backbone of information management in many industrial and commercial enterprises and focus on the management of master and transaction data (Sumner, 2005). Besides ERP systems, Enterprise Content Management Systems (ECM systems) have also developed into company-wide application systems over the last few years.
DOI: 10.4018/978-1-60566-968-7.ch008
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
ECM solutions focus on indexing all information within an enterprise (Rockley, 2003). They cover the processes of enterprise-wide content collection, creation, editing, managing, dispensing and use, in order to improve enterprise and cooperation processes (CMSWatch, 2009). In order to manage information independently, ECM combines technologies such as document management, digital archiving, content management, workflow management, etc. The use of ECM systems is constantly on the rise (Lämmer et al., 2008). This leads to an increasing motivation for enterprises to integrate their ECM systems with the existing ERP systems, especially when considering growing international competition. The need for integration is also largely based on economic aspects, such as the expense factor of system run time (Aier & Schönherr, 2006). For a cross-system improvement of business processes, enterprise systems have to be integrated.
RELATED WORK

Service-Oriented Architecture as an Integration Approach

A number of integration approaches and concepts already exist. They can be differentiated by integration level (for example data, function or process integration) and integration architecture (for example point-to-point, hub & spoke, SOA) (Aier & Schönherr, 2006). This paper presents an approach to integrating enterprise systems by building up service-oriented architectures. This integration approach is of special interest and will be described in more detail. The concept of service orientation is currently being intensively discussed. It can be differentiated from component orientation by its composition and index service (repository). Additionally, SOA is suitable for a process-oriented, distributed integration (Aier & Schönherr, 2006). However, the goals addressed by component orientation
and SOA are similar: different enterprise systems are connected through one interface, cross-system data transfer is enabled, and objects or components can be re-used. A service represents a well-defined function which is provided in reaction to an electronic request (Burbeck, 2000). The SOA approach offers a relatively easy way to connect, add and exchange single services, which greatly simplifies the integration of similar systems (e.g. after an enterprise take-over). Moreover, SOA offers a high degree of interoperability and modularity, which increases the adaptability of enterprise systems (Andresen et al., 2008; Lämmer et al., 2008). The SOA approach is based on the concept of a service. The sender wants to use a service and, in doing so, to achieve a specific result; the sender is not interested in how the request is processed or which further requests are necessary. This is the idea of SOA, where services are defined in a specific language and referenced in a service index. Service requests and data exchange occur via pre-defined protocols (Erl, 2008; Papazoglou, 2007). This service orientation can be used on different levels of architecture. Grid architectures are a common example on the infrastructure level (Berman et al., 2003; Bry et al., 2004). On the application level, an implementation usually takes place in terms of web services. The use of web services offers the possibility of re-using raw source code, which is merely transferred to another environment (Sneed, 2000). The benefit of this transfer is the re-use of proven (old) algorithms. The main disadvantage is the necessity of revising the raw source code in order to find possible dependencies (Sneed, 2000). This is also true for enterprise systems. It is not efficient to re-use the entire old system, but rather only significant parts of it. To accomplish this, it is necessary to deconstruct the old enterprise system and to locate the source code parts which can effectively be re-used. Our approach uses self-diagnosis for finding these source code
locations. This analysis will be considered in the third integration step.
Self-Diagnosis

As just described, our approach uses self-diagnosis to locate useful source code. In the following, the method of self-diagnosis is presented and the differences to other approaches are shown. Some approaches for the transformation of legacy systems into a SOA already exist. However, these approaches see the whole system as one service: the system gets a service description so that it can be used as a single service within a SOA. Our approach differs in that it deconstructs the system according to the tailored need. For this, the method of self-diagnosis is used. Self-diagnosis can be defined as a system's capacity to assign a specific diagnosis to a detected symptom. The detection of symptoms and the assignment are performed by the system itself, without any outside influence (Latif-Shabgahi et al., 1999). The mechanism of self-diagnosis was first observed in natural systems; it can partly be applied to artificial systems as well. Self-diagnosis can be seen as an integral part of all self-organising systems. Self-organisation is a generic term for self-x capabilities, also called selfware. Self-organisation should lead information systems to become self-healing, self-protecting, self-optimizing and self-configuring (Hinchey et al., 2006; Ganek, 2007). For reaching these aims,
the system has to possess abilities of self-detection or self-diagnosis. Sensors that detect contextual and environmental changes, as well as a monitor for continuous analysis, should be a central part of a self-organizing information system; they create the basis for the following:

• For self-configuration, the possibilities and needs of configuration must be recognized.
• For self-healing, the symptoms of an infection must be identified.
• For self-optimization, it is necessary to search for new calculations, algorithms, solution spaces or process chains.
• Self-protection needs detectors for the identification of attacks.

Figure 1 shows a systematization of selfware.

Figure 1. Systematization of selfware

Self-diagnosis is necessary for achieving all the aims of self-organising systems and therefore has to be subordinated to self-management. Some approaches to self-diagnosis in computer systems already exist, e.g. fault detection in network communication, storage and processors (Sun, 2004). Self-diagnosis is also used in infrastructure and in load sharing for performance (Mos & Murphy, 2001). In this article, a proposal for the implementation of self-diagnosis in enterprise application systems, and particularly
in the application layer of information systems, is presented without using case-based reasoning or multi-agent systems. The first step of self-diagnosis is the detection of symptoms. Usually the detection of one existing symptom is not sufficient to make an indisputable diagnosis; in this case, more information and data have to be gathered. This can be described as symptom collection. In a second step, the symptoms are assigned to a specific diagnosis. Depending on the diagnosis, corresponding measures can be taken (Horling et al., 2001). Symptoms are a rather abstract part of self-diagnosis. Symptoms can be a high network load in distributed systems, missing signals, or buffer overloads at the hardware layer. For enterprise systems, symptoms can be, for example, the frequency with which user interface elements are used, or the dependencies of code parts or components. Other types of symptoms are possible. In general, asking which items are worth measuring provides hints for possible symptoms.
Differentiation of Collection

Self-diagnosis can be categorized by the method of symptom acquisition: active and passive self-diagnosis must be distinguished. In this context, the program or source code is the crucial
factor for the division between active and passive self-diagnosis. A fundamental basis for either alternative is an observer, or monitor. In passive self-diagnosis, the monitor detects and collects symptoms and information; it can be activated automatically or manually (Satzger et al., 2007). If one knows which items need to be observed and the point where this information can be gathered, only this point has to be monitored. This is what passive self-diagnosis does. For example: to find out how often a button is pressed, one has to find where the button event is implemented in the code and observe this event. In active self-diagnosis, the program's functions or modules are the active elements. They send defined information to the monitor and act independently if necessary. The monitor is used as a receiver; it interprets the gathered information and symptoms. The main advantage of active self-diagnosis is the possibility of detecting new symptoms, even if no clear diagnosis can be made before the problems become acute and are forwarded to other systems. In contrast, using passive self-diagnosis, the monitor can only inquire about specific data; a response or further examination is only possible if the problem is already known. For example: if the location of all the buttons, or of the code component for the button event, is unknown, one will
Figure 2. Kinds of acquisition of symptoms for self-diagnosis; a) passive; b) active
have to recognise all events with their initial points and filter them with the monitor. The monitor does not have to know how many buttons exist or where their code is located, but the buttons have to "know" to register with the monitor. These are the requirements of active self-diagnosis. The placement of diagnosis points depends on the application context and the software system. The required time and effort cannot be specified in general; it depends on the design and implementation of the software system. Self-diagnosis can also be employed for the examination of source code usage and interdependences. Depending on the desired information, different points of diagnosis have to be integrated into the source code, determined so as to allow for the allocation of code parts to various fields and functions. Therefore context, programming language, and software system architecture must be considered.
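The two acquisition styles can be illustrated with a minimal Java sketch. All class and method names below are invented for this example: the monitor counts the symptom messages it receives; the instrumented button shows the active style, in which the component pushes its own events, whereas a passive monitor would instead poll a single known observation point.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical monitor: receives symptom messages and counts them.
class Monitor {
    private final Map<String, Integer> symptomCounts = new ConcurrentHashMap<>();

    // Active style: the instrumented code calls this method itself.
    void report(String symptom) {
        symptomCounts.merge(symptom, 1, Integer::sum);
    }

    Map<String, Integer> snapshot() {
        return Map.copyOf(symptomCounts);
    }
}

// Active self-diagnosis: the component "knows" the monitor
// and registers its own events with it.
class SaveButton {
    private final Monitor monitor;

    SaveButton(Monitor monitor) {
        this.monitor = monitor;
    }

    void press() {
        monitor.report("button.save.pressed"); // push the symptom
        // ... original button logic ...
    }
}

public class SelfDiagnosisSketch {
    public static void main(String[] args) {
        Monitor monitor = new Monitor();
        SaveButton button = new SaveButton(monitor);
        button.press();
        button.press();
        // A passive monitor would instead poll one known observation
        // point, e.g. an event log the button already writes to.
        System.out.println(monitor.snapshot()); // {button.save.pressed=2}
    }
}
```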
Differentiation of Layers

Self-diagnosis can be employed for the examination of source code usage and interdependences. Depending on the desired information, different markers have to be integrated into the source code. Three layers can be distinguished: (1) the source code layer, (2) the component layer and (3) the application layer. For self-diagnosis on the source code layer, the markers have to be integrated into the functions or methods. This offers a very fine-grained analysis at the lowest level of the system, but it involves considerable effort and requires deep knowledge of the programming of the system. Self-diagnosis on the component layer integrates the markers into functionally combined code fragments, the components of the information
system. The number of markers becomes correspondingly smaller. This implementation is important for deriving the dependencies (detection of causality) between the different components of the system. The third layer is the application layer, where one marker is used per application. This implementation is mainly used for surveying enterprise application landscapes.
Differentiation of Patterns

For self-diagnosis, data are transmitted to a monitor, either from the code or from the application. Depending on the purpose, patterns can be identified which refer to the frequency of messages (frequency pattern) or to markers reporting the dependencies of code fragments (causality pattern). The frequency pattern provides information about the usage of parts of an application and allows conclusions to be drawn about users as well as temporal details. The monitor does not view the content of a message; it counts the frequency of a message and saves the corresponding metadata, e.g. the invoking user or the instant of the request for information. The application context of the frequency pattern is particularly interesting. Frequency patterns can be used to supervise the use of code: on the code layer, they make it possible to analyze exactly how often certain functions are used. This analysis can be used for maintenance. The maintenance overhead of operational standard software increases with every function that is taken over into the standard, and at run time it is usually unknown which functions are really used. By detecting usage frequencies at run time, the software system can be slimmed down, making sure that maintenance effort is spent only on the functions which are actually needed. The causality pattern shows the dependencies of certain code fragments on each other. Again, the focus is on metadata such as
users and time. Unlike the frequency pattern, the causality pattern collects information about the code fragment invoked before and the code fragment to be invoked next. For this, the monitor must be able to determine and save the predecessor and the successor of every message. By analyzing these data, statements about the dependencies between code fragments become possible. The causality pattern is particularly important for examining a system's connection to the business process or for the encapsulation of code fragments. An application scenario for the causality pattern on the component layer is found in the area of system integration: here, the dependencies of the individual components on each other are important for the isolation or reprocessing of components and for wrapping them into a web service (Sneed, 2000). The context pattern examines not only the metadata but also recognizes messages belonging together. This is a semantic evaluation of the messages. In this case, the cost of developing the monitor is much higher than for the other two detection models, because messages might have a different wording and be written in different languages. Our approach uses this method to locate code parts that can be collected into components. As we will demonstrate later in this article, we need to locate the functions and business objects of enterprise systems. This method can be used for the detection of code parts which are possible services. Diagnosis points must thereby be integrated into the system source code, and software dependencies analysed. As discussed earlier, the main challenges in the integration of legacy enterprise systems like ERP and ECM are, first, the deconstruction and, second, the allocation of code. To address these challenges, we have developed a procedure model, which will be described next.
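Before turning to the procedure model, the two metadata-based patterns just described can be sketched in Java. This is only an illustrative reading of the patterns, not the authors' implementation; the frequency monitor counts messages and keeps metadata, while the causality monitor also records predecessor/successor pairs from which dependency clusters can later be derived.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Frequency pattern: count how often each code fragment reports in,
// keeping only metadata (here: the instant of the last invocation).
class FrequencyMonitor {
    final Map<String, Integer> counts = new HashMap<>();
    final Map<String, Instant> lastSeen = new HashMap<>();

    void record(String fragmentId) {
        counts.merge(fragmentId, 1, Integer::sum);
        lastSeen.put(fragmentId, Instant.now());
    }
}

// Causality pattern: additionally remember which fragment was invoked
// before which, yielding predecessor/successor pairs.
class CausalityMonitor {
    private String previous;                        // last fragment seen
    final List<String[]> edges = new ArrayList<>(); // (predecessor, successor)

    void record(String fragmentId) {
        if (previous != null) {
            edges.add(new String[] { previous, fragmentId });
        }
        previous = fragmentId;
    }
}
```

In a multi-user enterprise system, both monitors would additionally key their records by user or session, matching the metadata mentioned above.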
PROCEDURE MODEL

In the following, a procedure model that integrates general application systems within a company is presented. The procedure model begins with the deconstruction of the systems into web services. This is followed by a mapping of redundant functions and the assignment of original source code to the web services, which are orchestrated in the last step. The process includes taking the old ERP system, deconstructing it into different abstraction levels such as functional entities or business objects, searching for redundant entities, allocating the code fragments belonging to these functional entities, and encapsulating them. This results in many independent functional entities, each of which can be described as a service. They have different abstraction levels and have to be composed and orchestrated with, e.g., WS-BPEL. This composition and orchestration is the way of integration.

Figure 3. Procedure model for the integration of application systems
Deconstruction of Systems

First, the systems which are to be integrated are deconstructed into services. The challenge of this step lies in choosing the granularity of the services, which could span the range from one single service per system up to a definition of every single method or function within a system as a service. In the case of a very coarse definition, advantages such as easy maintenance, reuse, etc. will be lost; in the case of a very fine-grained definition, disadvantages concerning performance and orchestration develop, and the configuration and interdependencies of the services become too complex. This paper proposes a hierarchical approach which describes services of different granularities on three hierarchical levels. Areas of function of a system are described as the first of these levels (Figure 3, part 1). For example, an area of functions could include purchase or sales in the case of ERP systems and, in the case of ECM systems, archiving or content management. An area of
function can be determined on an abstract level by posing questions about the general "assigned task" of the system. The differences between the three hierarchical levels can be discovered by answering the following questions:

1. Question: What are the tasks of the particular system? The answers resulting from this step correspond to services on the first level, which constitute the general tasks, for example sales, purchase, inventory management, or workflow management, archiving and content management. These tasks are abstract and describe main functionalities. They consist of many other functions, which are the objects of the next level.
2. Question: Which functionality derives from every single task? The answers to this question correspond to the services on the second level, which are contributed by the various functions. These functions are more detailed than the general tasks. They describe what the tasks consist of and what they do, for example: calculate the delivery time, identify a major customer, or constitute check-in and e-mail functionalities. For these functions the application needs data, which can be found on the third level.
3. Question: Which business objects are utilised by both systems? The answers constitute the business objects that will be used as basic objects in both systems, e.g. article data, customer data or index data.
In this procedure model, all possible levels of service deconstruction are addressed; yet the realisation on each hierarchical level constitutes an individual task. The result of this step is a 3-stage model displaying the services of an application. The data level, i.e. the integration of databases, is not further examined at this point, since it is not an integral part of our model, the aim of which is to wrap functions as web services without altering
them or the original source code. The data level is not touched by this process.
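One possible way to picture the resulting 3-stage model is as a simple containment structure, with areas of function at the top, functions in the middle, and business objects at the bottom. The sketch below is merely illustrative; the entries are taken from the examples in the questions above.

```java
import java.util.List;

// Level 3: business objects used by the functions of both systems.
record BusinessObject(String name) {}

// Level 2: a single function, with the business objects it needs.
record ServiceFunction(String name, List<BusinessObject> uses) {}

// Level 1: an area of functions (a general task of the system).
record FunctionalArea(String name, List<ServiceFunction> functions) {}

public class ServiceHierarchy {
    public static void main(String[] args) {
        BusinessObject customer = new BusinessObject("customer data");
        BusinessObject article  = new BusinessObject("article data");

        FunctionalArea sales = new FunctionalArea("sales", List.of(
            new ServiceFunction("calculate delivery time", List.of(article)),
            new ServiceFunction("identify major customer", List.of(customer))));

        System.out.println(sales);
    }
}
```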
Preparation and Mapping

The main advantage of a web service architecture is the high degree of possible reuse. The division into three hierarchical levels makes a detection of similar functions possible, especially on the levels of functionality and business objects. In some cases an adjustment of the functions is necessary in order to serve different contexts of use. Therefore, the next step consists of integration on the different levels and the mapping of identical functions (Figure 3, part 2). This step poses the following questions:

1. Question: Which tasks, functions and business objects appear more than once? For example: most applications contain search functions, some applications have functions for check-in and check-out, and ERP systems calculate times for many things with the same algorithm under different names.
2. Question: Are these multiple functions and objects redundant, i.e. superfluous, or do they provide different services? Some functions may have the same name but perform different tasks.
3. Question: Can these multiple functions and objects be combined by way of appropriate programming? For functions found in question 2 to be similar functions with different names, the possibility of integrating them into one has to be analysed.
The advantage of this mapping is the detection of identical functions which may only be named differently while completing the same task. In doing so, the benefit of reuse can be exploited to a high degree. Additionally, this part of the survey allows for a minimisation of programming due to the encapsulation of multiple functions. Only those functions which share a high number of
similarities, but nevertheless complete different tasks, have to be programmed differently; they can be merged by reprogramming. It is important to note that up to this point the deconstruction takes place on an abstract level, in the functional view. In the following step, this changes from a functional view to a code view.
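Before that change of view, the mapping step itself can be sketched. The heuristic below, which groups the functions of both systems by a normalized name and flags groups with more than one member, is only a hypothetical first filter; the final judgement on whether two functions are really the same remains with the developer.

```java
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.stream.Collectors;

public class FunctionMapping {
    record Candidate(String system, String functionName) {}

    public static void main(String[] args) {
        List<Candidate> functions = List.of(
            new Candidate("ERP", "Check In"),
            new Candidate("ECM", "check in"),
            new Candidate("ERP", "calculate delivery time"));

        // Group by a normalized name; groups with more than one member
        // are candidates for being merged into one shared web service.
        Map<String, List<Candidate>> groups = functions.stream()
            .collect(Collectors.groupingBy(
                c -> c.functionName().toLowerCase(Locale.ROOT).trim()));

        groups.forEach((name, members) -> {
            if (members.size() > 1) {
                System.out.println("possible duplicate: " + name + " " + members);
            }
        });
    }
}
```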
Detection and Assignment of Services to Code Fragments

The next step brings the biggest challenge, namely the transformation of the existing applications into a service-oriented architecture. Until now, services have been identified by their tasks, but the correlation to the existing source code still needs to be established. This is accomplished in the next step (Figure 3, part 3). Self-diagnosis is used at this point: the points of diagnosis defined earlier are integrated into the source code. These points of diagnosis actively collect usage data and, via their structure, facilitate conclusions concerning fields and functions. The structure of the points of diagnosis depends on the context of their application and on the software system; the complexity of this process cannot be specified in general, as it also depends on the structure and programming of the software system. As we discussed earlier in section 2.2, the points of diagnosis depend on what needs to be observed. Here we want to know which code fragments belong together and execute the functions identified in the functional view. From this follows the necessity of a monitor. For example, the points of diagnosis can be every method call in the source code of an ERP system. If the user calls a function, the points of diagnosis have to inform the monitor that they were called. The monitor has to recognise and analyse which method calls belong together. Now the code fragments are analysed and assigned to the functions identified in part 1, and the wrapping of code fragments into web services
can be started. This step necessitates the use of the existing source code and the description of the relevant parts with a web service description language, making possible the reuse of the source code in a service-oriented architecture. If redundant services that need to be reengineered have been detected in part two, the reengineering happens now.
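As a hedged illustration of this wrapping step, a code fragment that has been assigned to a service can be fronted by an annotated class from which the WSDL description is generated. The sketch uses the standard JAX-WS annotations (javax.jws); the service and method names, and the legacy module behind them, are invented.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical wrapper: exposes an identified legacy function as a
// web service; the WSDL contract can be generated from the annotations.
@WebService(name = "DeliveryTimeService")
public class DeliveryTimeService {

    @WebMethod
    public int identifyDeliveryTime(String articleId) {
        // Delegate to the original, unmodified legacy code.
        return LegacyErpModule.calculateDeliveryDays(articleId);
    }
}

// Stand-in for the encapsulated legacy code fragment.
class LegacyErpModule {
    static int calculateDeliveryDays(String articleId) {
        return 5; // placeholder for the original algorithm
    }
}
```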
Orchestration of Web Services

The result of stage three is a set of described web services. These have to be connected with each other depending on the business process. This orchestration takes place in several steps (Figure 3, part 4). First, the context must be defined; second, the service description language has to be selected; and third, the web services need to be combined.

A four-stage procedure model for a service-oriented integration of application systems has just been described. This process holds the advantages of a step-by-step transformation. The amount of time needed for this realisation is considerably higher than in a "big bang" transformation; however, a "big bang" transformation carries a higher risk and therefore requires high-quality preparation measures. For this reason, a "big bang" transformation is dismissed in favour of a step-by-step transformation. There is yet another important advantage to integrating or deconstructing application systems into services in several steps. First, a basic structure is built (construction of a repository, etc.). Next, a granular decomposition into web services occurs on the first level, thereby realising a basic transformation to a service-oriented concept. Following this, web services of the second and third hierarchical levels can be integrated step by step. This reduction into services provides high-quality integration. The procedure model we have just presented is very abstract. Therefore, a practical example for
two enterprise systems, ERP and ECM, will be given in part 4.
EXAMPLE OF APPLICATION

Since no concrete practical scenario for these technologies yet exists, it is necessary to develop a general usage approach and to test it on ERP and ECM systems (Gronau, 2003). The aim of this example of use is to describe the integration of two company-wide systems, ERP and ECM, using the presented approach. In what follows, we present a case study of the integration of an ERP and an ECM system. A German manufacturer of engines and devices administrates a complex IT landscape. This IT landscape includes, among others, two big enterprise systems: the ERP system "Microsoft Dynamics NAV" and the ECM system "OS.5|ECM" of Optimal Systems. The ERP system includes modules such as purchase, sales and article management. The ECM system consists of modules such as document management, archiving and workflow management. In the current situation, a bi-directional interface between both systems exists. One example of a business process in which both systems are used is the processing of incoming mail and documents. In order to scan and save the incoming invoices of suppliers, the "document management" module of the ECM system is used. Access to the invoice is made possible through the ERP system. In the future, a SOA-based integration of both enterprise systems can reasonably be expected under the aspect of business process improvement. Referring to the example mentioned above, the "portal management" component could be used to access, search, and check in all incoming documents. What now follows is a description, in four parts, of the integration based on the procedure model presented in part 3.

Figure 4. Segmentation of services
Part 1: Segmentation of ERP and ECM Systems into Services

According to the procedure model (Figure 3), the individual systems are separated into independent software objects, which in each case complete specified functions or constitute business objects. The segmentation is structured in three bottom-up steps (Figure 4). Identification is based on the answers to the questions concerning the main tasks of the specific systems. The basic functions of an ERP system are purchase, sales, master data management, article management and repository management. Document management, content management, records management, workflow management and portal management are basic functions of ECM systems. Subsequently, the areas of functions are disaggregated into separate tasks. Business objects are classified, such as the business object "article" or "customer". Thus, segmentation into areas of functions, tasks of functions and business objects is achieved, and a basis for the re-usage of services is created.
Part 2: Preparation of Integration/Mapping

The result of the first segmentation step is a separation into services of differentiated granularity per system. According to the procedure model, the mapping of the different areas is arranged in the second step. For that purpose, the potential services described are examined for similarities. On every level of the hierarchy, the functional descriptions (answers to the questions in part 1) of the services are checked and compared with each other. If functions or tasks are similar, they have to be checked for the possibility of combination and slotted for later reprogramming. One example of such similarity between functions is "create index terms": most enterprise systems include this function for documents such as invoices or new articles. The estimation
of the analogy of different functions, particularly in enterprise systems where the implementation differs, lies in the expertise of the developer. Another example is the service "check in/check out". This service is a basic function of both ERP and ECM systems and is now to be examined for possible redundancy. After determining that the services "check in" and "check out" are equal, each service is registered as a basic function only once. Services which are not equal but related are checked in another step and either unified with suitable programming or, if possible, split into different services. The results of this step are the classification of services from the ERP and ECM systems into similar areas and the separation of redundant services. Table 1 shows examples of separated services. By this separation of both enterprise systems, a higher degree of re-usage and improved complexity-handling of these systems is achieved. For the application of services, a service-oriented architecture (SOA), which defines the different roles of the participants, is now required (Burbeck, 2000).

Table 1. Examples of separate services of ERP and ECM systems

Separate services       ERP                       ECM
Areas of functions      purchase                  content management
                        sales                     archiving
                        article management        document management
                        repository management     workflow management
Basic functions         check in                  email connection
                        identify delivery time    save document
                        check out                 create index terms

Part 3: Detection and Assignment of Services to Code Fragments

As already described in the general introduction, the identification of the functions to be segmented
in the source code constitutes one of the biggest challenges in a transfer to service-oriented architecture. As part of this approach, the method of self-diagnosis is suggested. Appropriate points of diagnosis are linked to the source code in order to draw conclusions from the functions used to the associated class, method or function in the original source code. Through the use of aspect-oriented programming, aspects can be programmed and linked to the classes and methods of the application system. Necessary data, such as the name of the accessed method, can be collected by accessing the respective classes and methods (Vanderperren et al., 2005). Based on a defined service, e.g. "order transaction", the names of all methods which are necessary for the execution of "order transaction" must be identified. To wrap the service "order transaction", i.e. to combine it with a web service description language, the original methods need to be searched for and encapsulated. Additionally, the reprogramming of redundant functions is part of this phase of identification and isolation of services. This, as well, is only possible if the original methods are identified.
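A minimal sketch of such a diagnosis aspect, in AspectJ's Java annotation style, is shown below. The pointcut expression, the package name and the monitor are placeholders, and the aspect must be woven with the AspectJ weaver; each matching method execution reports its signature, so that the methods observed during "order transaction" can later be clustered and encapsulated.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Hypothetical diagnosis aspect: reports every method execution in the
// (invented) package com.example.erp to a monitor.
@Aspect
public class DiagnosisAspect {

    @Before("execution(* com.example.erp..*(..))")
    public void recordCall(JoinPoint joinPoint) {
        // e.g. "int com.example.erp.sales.OrderService.placeOrder(String)"
        DiagnosisMonitor.record(joinPoint.getSignature().toLongString());
    }
}

class DiagnosisMonitor {
    static void record(String methodSignature) {
        // Collect signatures; methods observed while "order transaction"
        // runs form the code fragments to be wrapped as one web service.
        System.out.println(methodSignature);
    }
}
```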
Part 4: Orchestration of Web Services

The last integration phase serves to compose the web services. The preceding steps had to be completed in preparation. The web services are now completely described, and each has a URI through which it can be accessed; only the composition and the order in which the specific web services are invoked are still missing. For the orchestration, the Web Services Business Process Execution Language (WS-BPEL) is recommended. WS-BPEL was developed by the OASIS group and is currently in the process of standardisation (Cover, 2004). If the web services represent functions of a business process, WS-BPEL is particularly suitable for their orchestration (Lübke et al., 2006). Essentially, BPEL is a language to
pose (Leymann & Roller 2000) new web services from existing web services with help of workflow technologies (Leymann, 2003). In BPEL, a process is defined which is started by a workflow system in order to start a business process. Web services are addressed via a graphical representation with a modelling imagery of WSBPEL. The business process is modelled independently from the original enterprise systems. Since in the first integration step, the systems were separated by their tasks and functions, now all of the functions are available for the business process as well.
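A full WS-BPEL definition is an XML document and is beyond the scope of this example; as a language-neutral sketch, the Java fragment below expresses the control flow such a process might define for the incoming-invoice scenario of this case study. The three service interfaces are hypothetical stand-ins for the clients generated from the web services' descriptions.

```java
// Sketch of the control flow a WS-BPEL process for the incoming-invoice
// scenario might express; the service interfaces are hypothetical
// stand-ins for clients generated from the web services' descriptions.
public class IncomingInvoiceProcess {

    interface DocumentService { String saveDocument(byte[] scan); }    // ECM
    interface IndexService    { void createIndexTerms(String docId); } // ECM
    interface PurchaseService { void bookInvoice(String docId); }      // ERP

    private final DocumentService documents;
    private final IndexService index;
    private final PurchaseService purchase;

    IncomingInvoiceProcess(DocumentService d, IndexService i, PurchaseService p) {
        this.documents = d;
        this.index = i;
        this.purchase = p;
    }

    // Corresponds to a BPEL <sequence> of <invoke> activities; the
    // document identifier plays the role of a process variable.
    void run(byte[] scannedInvoice) {
        String docId = documents.saveDocument(scannedInvoice); // ECM service
        index.createIndexTerms(docId);                         // ECM service
        purchase.bookInvoice(docId);                           // ERP service
    }
}
```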
CONCLUSION

Critical Consideration

The procedure model for the integration of application systems presented in this paper is an approach that has been successfully deployed in one case. Currently, its assignability and universality are being tested. The self-diagnosis, i.e. the assignment of source code to services via aspect-oriented programming, constitutes the bigger challenge. A sufficient verification of costs and benefits cannot yet be given; however, several examples show convincing results and suggest general transferability. The complexity of such a realisation cannot be specified in general; particularly for bigger and more complex systems, the cost-benefit ratio has to be verified. Despite this, it must be recognised that the assignment of code fragments to functions is not an easy task. If one observes every method call, a high number of calls must be analysed. Visualisation can be helpful for this analysis, since method calls belonging together will build a cluster in the emerging network. The observation of method calls is possibly not the optimal way for very complex systems. If the functional view of services in part 1 stops not at the business object layer but at the general task layer,
the number of diagnosis points can be reduced. The possibilities depend on the programming language and its constructs.
Resume

The approach presented above describes a procedure model for the service-oriented integration of different application systems. The integration proceeds using web services, which thereby improve integration ability, interoperability, flexibility, and sustainability. The reusable web services facilitate the extraction of several functions and their combination into new services. This allows for the reuse of several software components. Altogether, web services improve the adaptability of software systems to business processes and increase efficiency (Gronau, 2003). As an example of the realisation of the procedure model, the integration of an ERP and an ECM system was chosen. The reasons for this choice are the targeted improvement of business aspects and the increasing complexity of both application systems. Dealing with this complexity makes integration necessary. Through mapping, redundant functions can be detected and, as a consequence, a reduction of complexity is made possible. Regarding the adaptability and flexibility of the affected application systems, web services are a suitable approach for integration. In particular, it is the reuse of services and an adaptable infrastructure which facilitate the integration. In addition, we expect to discover further advantages concerning the maintenance and administration of the affected application systems.
REFERENCES

Aier, S., & Schönherr, M. (2006). Evaluating integration architectures – A scenario-based evaluation of integration technologies. In D. Draheim & G. Weber (Eds.), Trends in enterprise application architecture: Revised selected papers (LNCS 3888, pp. 2-14).
Andresen, K., Levina, O., & Gronau, N. (2008). Design of the evolutionary process model for adaptable software development processes. Paper presented at the European and Mediterranean Conference on Information Systems 2008 (EMCIS2008).

Berman, F., Fox, G., & Hey, T. (2003). Grid computing: Making the global infrastructure a reality. Wiley.

Bry, F., Nagel, W., & Schroeder, M. (2004). Grid computing. Informatik Spektrum, 27(6), 542–545.

Burbeck, S. (2000). The Tao of e-business services. IBM Corporation. Retrieved October 7, 2006, from http://www.ibm.com/software/developer/library/ws-tao/index.html

CMSWatch. (2009). The ECM suites report 2009, Version 3.1. Olney: CMS Works, CMS Watch.

Cover, R. (2004). Web standards for business process modeling, collaboration, and choreography. Retrieved from http://xml.coverpages.org/bpm.html

Erl, T. (2008). Web service contract design and versioning for SOA. Prentice Hall International.

Ganek, A. (2007). Overview of autonomic computing: Origins, evolution, direction. In M. Parashar & S. Harir (Eds.), Autonomic computing (pp. 3-18). New York: Taylor & Francis Group.

Gronau, N. (2003). Web services as a part of an adaptive information system framework for concurrent engineering. In R. Jardim-Goncalves, J. Cha, & A. Steiger-Garcao (Eds.), Concurrent engineering: Enhanced interoperable systems.

Hinchey, M. G., & Sterritt, R. (2006). Self-managing software. Computer, 40(2), 107–111. doi:10.1109/MC.2006.69

Horling, B., Benyo, B., & Lesser, V. (2001). Using self-diagnosis to adapt organizational structures (Tech. Rep. TR-99-64). University of Massachusetts.
Kuropka, D., Bog, A., & Weske, M. (2006). Semantic enterprise services platform: Motivation, potential, functionality and application scenarios. In Proceedings of the tenth IEEE international EDOC Enterprise Computing Conference, Hong Kong (pp. 253-261).
Mos, A., & Murphy, J. (2001). Performance monitoring Of Java component-oriented distributed applications. Paper presented at the IEEE 9th International Conference on Software, Telecommunications and Computer Networks - SoftCOM 2001.
Lämmer, A., Eggert, S., & Gronau, N. (2008). A procedure model for a SOA-Based integration of enterprise systems. International Journal of Enterprise Information Systems, 4(2), 1–12.
Papazoglou, M. P. (2007). Web services: Principles and technology. Prentice Hall.
Latif-Shabgahi, G., Bass, J. M., & Bennett, S. (1999). Integrating selected fault masking and self-diagnosis mechanisms. In Proceedings of the Seventh Euromicro Workshop on Parallel and Distributed Processing (pp. 97-104). IEEE Computer Society.

Leymann, F., & Roller, D. (2000). Production workflow - Concepts and techniques. Prentice Hall International.

Leymann, F., & Roller, D. (2006). Modeling business processes with BPEL4WS. Information Systems and E-Business Management, 4(3), 265–284. doi:10.1007/s10257-005-0025-2

Lübke, D., Lüecke, T., Schneider, K., & Gómez, J. M. (2006). Using event-driven process chains for model-driven development of business applications. In F. Lehner, H. Nösekabel, & P. Kleinschmidt (Eds.), Multikonferenz Wirtschaftsinformatik 2006 (pp. 265-279). GITO-Verlag.

Monk, E., & Wagner, B. (2005). Concepts in enterprise resource planning (2nd ed.). Boston: Thomson Course Technology.
Rockley, A. (2003). Managing enterprise content. Pearson Education.

Satzger, B., Pietzowski, A., Trumler, W., & Ungerer, T. (2007). Variations and evaluations of an adaptive accrual failure detector to enable self-healing properties in distributed systems. In P. Lukowicz, L. Thiele, & G. Tröster (Eds.), Architecture of computing systems - ARCS 2007 (LNCS 4415, pp. 171-184).

Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9, 293–313. doi:10.1023/A:1018989111417

Sumner, M. (2005). Enterprise resource planning. NJ: Pearson Education.

Sun. (2004). Predictive self-healing in the Solaris 10 operating system: A technical introduction. Retrieved from http://www.sun.com/bigadmin/content/selfheal/selfheal_overview.pdf

Vanderperren, W., Suvée, D., Verheecke, B., Cibrán, M. A., & Jonckers, V. (2005). Adaptive programming in JAsCo. In Proceedings of the 4th international conference on Aspect-oriented software development. ACM Press.
This work was previously published in Organizational Advancements through Enterprise Information Systems: Emerging Applications and Developments, edited by Angappa Gunasekaran and Timothy Shea, pp. 120-133, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 5.7
Achieving Business Benefits from ERP Systems

Alok Mishra, Atilim University, Turkey
ABSTRACT

Enterprise resource planning (ERP) systems are becoming popular in medium and large-scale organizations all over the world. As companies have to collaborate across borders, languages, and cultures, and integrate business processes, ERPs will need to take globalization into account, be based on a global architecture, and support the features required to bring all the worldwide players and processes together. Due to the high cost of implementing these systems, organizations all over the world are interested in evaluating their benefits in the short and long terms. This chapter discusses the various kinds of business benefits in a comprehensive way in order to justify the acquisition and implementation of ERP systems in organizations in the present global context.

DOI: 10.4018/978-1-59904-531-3.ch005

Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

INTRODUCTION

Enterprise resource planning (ERP) systems have become very important in modern business operations, as ERP has played a major role in changing organizational computing for the better. One study found that more than 60% of Fortune 500 companies had adopted an ERP system (Stewart, Milford, Jewels, Hunter, & Hunter, 2000). These systems have been credited with reducing inventories, shortening cycle times, lowering costs, and improving supply chain management practices. ERP has been credited with increasing the speed with which information flows through a company
(Davenport, 1998). ERP has also been credited with creating value through integrating activities across a firm, implementing best practices for each business process, standardizing processes within organizations, creating one-source data that results in less confusion and error, and providing online access to information (O'Leary, 2000). All of these features facilitate better organizational planning, communication, and collaboration (Olson, 2004). Applied Robotics increased on-time deliveries by 40% after implementing ERP, and Delta Electronics reduced production control labor requirements by 65% (Weil, 1999). Therefore, in the last decade, organizations around the world have made huge investments in enterprise systems. According to a report by Advanced Manufacturing Research, the ERP software market was expected to grow from $21 billion in 2002 to $31 billion in 2006, and the entire enterprise applications market, which includes customer relationship management and supply chain management software, was expected to top $70 billion (AMR Research, 2002). It was estimated that in the 1990s about $300 billion was invested in ERP systems worldwide and that annual spending was expected to reach $79 billion by 2004 (Carlino, Nelson, & Smith, 2000). Enterprise systems include enterprise resource planning, customer relationship management (CRM), supply chain management (SCM), product lifecycle management (PLM), and e-procurement software (Shang & Seddon, 2002). ERP software integrates management information and processes, such as financial, manufacturing, distribution, and human resources, for the purpose of enabling enterprise-wide management of resources (Davenport, 1998; Deloitte Consulting, 1998; Klaus, Rosemann, & Gable, 2000). ERP helps organizations to meet the challenges of globalization with a comprehensive, integrated application suite that includes next-generation analytics, human capital management, financials, operations, and corporate services. With support for industry-specific best practices, ERP helps organizations improve productivity, sense
and respond to market changes, and implement new business strategies to develop and maintain a competitive edge. ERP is designed to help businesses succeed in the global marketplace by supporting international legal and financial compliance issues, and enabling organizations to adapt internal operations and business processes to meet country-specific needs. As a result, organizations can focus on improving productivity and serving their customers instead of struggling to ensure they are in compliance with business and legal requirements around the world. Companies that automate and streamline workflows across multiple sites (including suppliers, partners, and manufacturing sites) produced 66% more improvement in reducing total time from order to delivery, according to Aberdeen's 2007 study of the role of ERP in globalization. The majority of companies studied (79%) view global markets as a growth opportunity, but of those companies, half are also feeling pressure to reduce costs (Jutras, 2007). Those companies that coordinate and collaborate between multiple sites, operating as vertically integrated organizations, have achieved more than a 10% gain in global market share (Marketwire, 2007).
BUSINESS BENEFITS FROM ERP SYSTEMS

With the growing proliferation of ERP systems, including in midsize companies, it becomes critical to address why and under what circumstances one can realize the benefits of an ERP system (Gefen & Ragowsky, 2005). ERP systems can provide the organization with a competitive advantage through improved business performance (Hitt, Wu, & Zhou, 2002; Kalling, 2003) by, among other things, integrating supply chain management, receiving, inventory management, customer order
management, production planning and managing, shipping, accounting, human resource management, and all other activities that take place in a modern business (Gefen & Ridings, 2002; Hong & Kim, 2002; Kalling, 2003). Thus, business benefits from ERP systems use are multidimensional, ranging from operational improvements through decision-making enhancement to support for strategic goals (Davenport, 2000; Deloitte Consulting, 1998; Markus & Tanis, 2000; Ross & Vitale, 2000; Irani & Love, 2001; Wilderman, 1999; Cooke & Peterson, 1998; Campbell, 1998). Gartner Group (1998) also mentions enterprise systems benefits in these areas, including both tangible and intangible benefits. While some companies claim to have reduced their cycle times, improved their financial management, and obtained information faster through ERP systems, these systems in general still have a high initial implementation failure rate (Hong & Kim, 2002; Songini, 2004). Many prior studies examining the relationship between investing in IT and the performance level of the organization (Weil, 1999) dealt with the ratio of total IT investment (i.e., software, hardware, personnel) to the entire organization's performance (the total profit of the organization). Many early studies found no positive relationship between the two variables. Strassmann (1985) examined service-sector firms but found no significant relationship between investment in IT and high-performing firms. Berndt and Morrison (1992) even found a negative relationship between the growth in productivity and investment in high-tech, although, as they point out, this may have been the result of measurement problems. As Brynjolfsson (1993) summarizes, positive returns from investing in IT may not have shown up in previous research because of the inadequate way it was measured. When measuring IT investment on a per-user basis, there is a positive correlation between IT investment and overall productivity (Brynjolfsson, 2003). Although there is a large variance among companies in the benefit they achieve from their IT investment (Brynjolfsson, 2003), on average there is a $10 gain in company valuation for each dollar invested in IT (Brynjolfsson et al., 2002). Showing such a positive relationship is important because it affects MIS funding. According to Parker and Benson (1988), in order for an enterprise to gain competitive advantage, the way in which IT is justified financially must change. Classical quantitative techniques (e.g., cost-benefit analysis) are not adequate for the evaluation of IT applications, except when dealing with cost-avoidance issues, which generally occur at the operational level. If these methodologies are to be enhanced, additional measures—such as perceived value to the business, increased customer satisfaction, and the utility of IT in supporting decision making—must be considered (Katz, 1993). Whether investment in ERP systems pays off remains a controversial question (Hitt et al., 2002; Sarkis & Sundarraj, 2003; Kalling, 2003). ERP systems are very complicated software packages that support the entire set of organizational activities. Hence, it is possible that there are many unknown factors that impact the relationship between investment in ERP and organizational productivity. This chapter observes managers' perceptions of the benefits their organizations gain from using ERP systems and what impacts these benefits. ERP system investments are strategic in nature, with the key goal often being to help a company grow in sales, reduce production lead time, and improve customer service (Steadman, 1999). In IT evaluation, costs are hard to quantify in post-implementation audits, and benefits are even harder to identify and quantify (Hochstrasser & Griffiths, 1991; Willcocks & Lester, 1999; Irani, Sharif, & Love, 2001; Seddon, Graeser, & Willcocks, 2002). Management of organizations that adopt ERP expects many benefits from the systems. These expectations are often difficult to meet. ERP can be seen to provide more responsive information to management. There is also more interaction across the organization and more efficient financial operation (Olson, 2004). There is weaker perceived benefit from operational performance,
such as improved operating efficiency, inventory management, and cash management. While more information is available at higher quality, this does not directly translate to cost efficiencies across the board. SAP-ERP delivers business benefits where they matter most—to the bottom line—and addresses the internal and external business requirements of the global enterprise. Organizations can invest in mySAP ERP with confidence that expansion or change in any country or division will be supported. The solution provides global businesses with concrete benefits that enable success, including the following (My SAP ERP, 2007):

• Improved productivity for greater efficiency and responsiveness
• Increased insight for more assured decision making
• Advanced flexibility and adaptability to cut costs and speed change
• A partner for long-term growth
In addition, businesses can reduce the costs associated with compliance and administration, in part by creating flexible processes that balance global demands with local needs and that can be adapted quickly as regulations change. Comprehensive financial and reporting features ensure that globally consolidated financial reports can be generated quickly. Support for internal controls improves financial management and reduces the risk of noncompliance. In conjunction with the Collaboration Folders (cFolders) application, employees can work in seamless virtual project teams with other departments, partners, and suppliers around the world. Analytical capabilities help organizations improve strategic insight and performance through better identification of global market opportunities and drivers (My SAP ERP, 2007).

Benefits perceived from adopting an ERP system were studied by Mabert, Soni, and Venkataraman (2000) in Midwestern U.S.
1282
turing, and replicated in Sweden by Olhager and Selldin (2003). Both studies used a 1 to 5 scale, with 1 representing “not at all” and 5 representing “to a great extent.” Average ratings are given in Tables 1 and 2. Here, the results are very similar. ERP systems were credited with making information more available, at higher quality, and with integrating operations (Olson, 2004). There was neutral support for crediting ERP with providing benefits in Table 1. Expected benefits of ERP systems (Mabert et al., 2000; Olhager & Selldin, 2003) ERP Performance Outcomes
United States
Sweden
Quicker information response time
3.51
3.81
Increased interaction across the enterprise
3.49
3.55
Improved order management/order cycle
3.25
3.37
Decreased financial close cycle
3.17
3.36
Improved interaction with customers
2.92
2.87
Improved on-time delivery
2.83
2.82
Improved interaction with suppliers
2.81
2.78
Lowered inventory levels
2.70
2.60
Improved cash management
2.64
2.54
Reduced direct operating costs
2.32
2.74
Table 2. Areas benefiting from ERP systems Area
United States
Sweden
Availability of information
3.77
3.74
Integration of business operations/ process
3.61
3.42
Quality of information
3.37
3.31
Inventory management
3.18
2.99
Financial management
3.11
2.98
Supplier management/procurement
2.99
2.94
Customer responsiveness/flexibility
2.67
2.95
Decreased information technology costs
2.06
2.05
Personnel management
1.94
2.06
Achieving Business Benefits from ERP Systems
specific materials management and financial functions. The ratings of support for customer response and personnel management were quite low (although the Swedish rating for customer response was very close to neutral). Interestingly, both surveys found low support for crediting ERP systems with decreasing information technology costs. With an ERP, an organization can better negotiate with suppliers and reduce the cost of raw materials by as much as 15% (Schlack, 1992). Hence, the higher the cost of raw material, the higher the value of raw material cost reduction out of the cost of the product.
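To make the comparison concrete, the following minimal sketch (Python, illustrative only) re-keys the Table 1 means by hand and ranks the outcomes by the size of the US-Sweden gap; the data layout and variable names are ours, not the original studies'.

```python
# Illustrative only: mean ratings from Table 1 (Mabert et al., 2000;
# Olhager & Selldin, 2003), re-keyed by hand for comparison.
expected_benefits = {
    # outcome: (US mean, Sweden mean); 1 = "not at all", 5 = "to a great extent"
    "Quicker information response time": (3.51, 3.81),
    "Increased interaction across the enterprise": (3.49, 3.55),
    "Improved order management/order cycle": (3.25, 3.37),
    "Decreased financial close cycle": (3.17, 3.36),
    "Improved interaction with customers": (2.92, 2.87),
    "Improved on-time delivery": (2.83, 2.82),
    "Improved interaction with suppliers": (2.81, 2.78),
    "Lowered inventory levels": (2.70, 2.60),
    "Improved cash management": (2.64, 2.54),
    "Reduced direct operating costs": (2.32, 2.74),
}

# Rank outcomes by the absolute US-Sweden gap; most items differ by less
# than 0.2, which is why the chapter calls the two sets of results similar.
for outcome, (us, se) in sorted(expected_benefits.items(),
                                key=lambda kv: abs(kv[1][0] - kv[1][1]),
                                reverse=True):
    print(f"{outcome:<45} US={us:.2f}  SE={se:.2f}  gap={us - se:+.2f}")
```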
ERP BENEFITS FRAMEWORK

Shang and Seddon (2000) provided a comprehensive framework of the benefits of ERP systems. In their survey of 233 vendor success stories and 34 follow-up phone interviews from three major ERP vendor Web sites, they found that all organizations derived benefit from at least two of the five categories, and all the vendors' products had returned customer benefit in all five categories. At the beginning of 1997, during the reengineering process, most multinational organizations perceived the following benefits of implementing an ERP system:

• Common processes across the globe
• Centralized operations
• Multi-language and currency capabilities
• Better tracking of inventory
• Improved utilization of raw materials
• Tighter integration of production with sales and distribution
• Tax advantages through improved asset management
• Removal of a number of existing legacy systems
• Improved development and support environment
• Real-time functional system enhancement capability

In Table 3, the first three categories are based on Anthony's (1965) much-cited work on planning and control systems. Many IS benefit analyses and frameworks have been organized around Anthony's trinity of operational, managerial, and strategic levels of management. One example is Shang and Seddon (2000); other precedents include the following:

• Weil (1990) evaluated the payoff from three types of IS investment—in transactional, informational, and strategic systems—in the U.S. valve industry. He found that the greatest benefits came from investment in transactional-level IT.
• Gorry and Scott Morton (1971) and others (Silver, 1990; Demmel & Askin, 1992) reported significant benefits from using IT for managerial decision support.
• Porter and Miller (1985) and others (McFarlan, 1984; Rackoff, Wiseman, & Ullrich, 1985; Clemons, 1991; Venkataraman, Henderson, & Oldach, 1993) noted significant benefits from the use of IT in pursuing strategic goals.
• Mirani and Lederer (1998) adapted Anthony's framework to build an instrument for assessing the organizational benefits of IS projects.
• Hicks (1997), Reynolds (1992), and Schultheis and Sumner (1989) also used Anthony's categories in classifying IT benefits as operational, tactical, and strategic. The categories were also used as frameworks for analyzing the benefits of general and enterprise-wide information systems (Wysocki & DeMichiell, 1997; Irani & Love, 2001).
• Willcocks (1994) and Graeser, Willcocks, and Pisanias (1998) adapted Kaplan and Norton's (1996) balanced scorecard approach in assessing IS investment in financial, project, process, customer, learning, and technical aspects, and measured organizational performance along Anthony's three levels of business practice.

Therefore, there are very strong precedents in the IS literature for attempting to classify the benefits of enterprise systems in terms of organizational performance along Anthony's three levels of business practice (Shang & Seddon, 2000).

Table 3. ERP benefits framework and extent of tangibility and quantifiability (adapted from Shang & Seddon, 2000)

Dimension/Sub-Dimensions                                      Tangible   Quantifiable
1. Operational
   1.1 Cost reduction                                         Full       Full
   1.2 Cycle time reduction                                   Most       Full
   1.3 Productivity improvement                               Most       Full
   1.4 Quality improvement                                    Some       Most
   1.5 Customer services improvement                          Some       Most
2. Managerial
   2.1 Better resource management                             Some       Most
   2.2 Improved decision making and planning                  Some       Some
   2.3 Performance improvement                                Most       Most
3. Strategic
   3.1 Support business growth                                Some       Full
   3.2 Support business alliance                              Low        Most
   3.3 Build business innovations                             Some       Some
   3.4 Build cost leadership                                  Some       Some
   3.5 Generate product differentiation                       Some       Low
   3.6 Build external linkages                                Low        Some
4. IT infrastructure
   4.1 Build business flexibility for current and future changes   Low   Low
   4.2 IT costs reduction                                     Full       Full
   4.3 Increased IT infrastructure capability                 Some       Some
5. Organizational
   5.1 Support organizational changes                         Low        Low
   5.2 Facilitate business learning                           Low        Low
   5.3 Empowerment                                            Low        Low
   5.4 Build common visions                                   Low        Low
INTANGIBLE BENEFITS IN IT AND ERP PROJECTS

Webster (1994) defines a tangible item as "something that is capable of being appraised at an actual or approximate value." But whether 'value' means monetary worth or some other measure, such as customer satisfaction, is not certain. According to Hares and Royle (1994), "an intangible is anything that is difficult to measure," and the boundary between tangible and intangible is fuzzy at best. Determining the intangible benefits derived from information systems implementation has been an elusive goal of academics and practitioners alike (Davern & Kauffman, 2000). Remenyi and Sherwood-Smith (1999) pointed out that there are seven key ways in which information systems may deliver direct benefits to organizations. They also indicated that information systems deliver intangible benefits that are not easily assessed. Nandish and Irani (1999) discussed the difficulty of evaluating IT projects in a dynamic environment, especially when intangibles are involved in the evaluation. Tallon, Kraemer, and Gurbaxni (2000) cited a number of studies indicating that economic and financial measures fail to assess accurately the payoff of IT projects, and suggested that one means of determining value is through the perceptions of executives. They focused on the strategic fit and the contributions of IT projects, but indicated that researchers need somehow to capture or represent better the intangible benefits of IT.

In the technology arena, as in other business areas, many projects deliver benefits that cannot be easily quantified (Murphy & Simon, 2002). Many benefits related to information technology projects cannot be easily quantified: for example, better information access, improved workflow, interdepartmental coordination, and increased customer satisfaction (Emigh, 1999). These are also the features that are listed as key attributes of ERP systems (Mullin, 1999; Davenport, 2000). ERP systems are implemented to integrate transactions along and between business processes. Common business processes include order fulfillment, materials management, production planning and execution, procurement, and human resources (Murphy & Simon, 2002). ERP systems enable efficient and error-free workflow management and accounting processes, including in-depth auditing. These systems feature a single database to eliminate redundancy and multiple-entry errors, and they provide in-depth reporting functionality. ERP systems provide information for effective decision making at all organizational levels (Murphy & Simon, 2002).

According to Hares and Royle (1994), there are four main intangible benefits in IT investment:

1. Internal improvement: This includes processes, workflow, and information access.
2. Customer service: This ensures quality, delivery, and support.
3. Foresight: This is vision regarding markets, products, and acquisitions in the future.
4. Adaptability: This is the ability to adapt to change in a rapidly changing industry.
The third and fourth sets of intangibles are future oriented and include spotting market trends and the ability to adapt to change. Hares and Royle (1994) stated that the first set of ongoing intangible benefits are those concerned with internal improvement of company operations or performance. These include changes in production processes, methods of operations management, and changes to production value and process chains, with resulting benefits in increased output or lower production costs. The second group of ongoing benefits, customer-oriented intangibles, is more difficult to measure because their effectiveness is determined by external forces. The benefits of improving customer service are greater retention of customers and customer satisfaction. The third group of intangibles embodies the spotting of new market trends. If new trends can be anticipated, then technology may be able to transform or create products, processes, or services to gain new sales and market position. The final group of intangible benefits is the ability to adapt to change. As with the identification of market trends, the benefits derived include adapting products and services to market trends and the modification of production processes—a critical ability for firms in rapidly changing industries.

ERP system investments are strategic in nature, with the key goal often being to help a company grow in sales, reduce production lead time, and improve customer service (Steadman, 1999). Organizations turned up an average value of -$1.5 million when quantifiable cost savings and revenue gains were calculated against system implementation and maintenance costs. Improved customer service and related intangible benefits, such as an updated and streamlined technical infrastructure, are important benefits that organizations are often seeking when making these investments. The development and implementation of ERP systems is long in duration and cost intensive, and it is difficult to quantify in monetary terms because of the intangible nature of many of the derived benefits, for example, improved customer service (Murphy & Simon, 2002). The literature suggests that intangibles can be converted into monetary terms through their ability to:

1. Maintain and increase sales
2. Increase prices
3. Reduce costs
4. Create new business
Hares and Royle (1994) give a procedure to quantify intangible benefits. The major steps, illustrated in the sketch that follows, are:

1. Identify benefits to be quantified
2. Make intangible benefits measurable
3. Predict the benefits in physical terms
4. Evaluate the benefits in cash flow terms
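The following minimal sketch illustrates how these four steps might be carried through for a single intangible benefit; all figures (customer counts, margins, discount rate, horizon) are invented for illustration and are not from Hares and Royle (1994).

```python
# A minimal sketch of the four-step quantification procedure, with
# hypothetical numbers. Step 1: identify the benefit; Step 2: make it
# measurable; Step 3: predict it in physical terms; Step 4: evaluate it
# in cash flow terms.

DISCOUNT_RATE = 0.10   # assumed cost of capital
YEARS = 5              # assumed evaluation horizon

# Steps 1-3: the intangible benefit "improved customer service" is made
# measurable as customers retained per year, predicted in physical terms.
retained_customers_per_year = 120   # hypothetical prediction
margin_per_customer = 450.0         # hypothetical annual margin per customer ($)

# Step 4: convert the physical prediction into discounted cash flows.
annual_cash_flow = retained_customers_per_year * margin_per_customer
npv = sum(annual_cash_flow / (1 + DISCOUNT_RATE) ** t
          for t in range(1, YEARS + 1))

print(f"Annual cash flow attributed to the benefit: ${annual_cash_flow:,.0f}")
print(f"NPV over {YEARS} years at {DISCOUNT_RATE:.0%}: ${npv:,.0f}")
```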
ENTERPRISE SYSTEM BENEFIT FRAMEWORK

According to Shang and Seddon (2002), the following five-dimensional framework, which is built on a large body of previous research into IT benefits, is organized around operational efficiency and managerial and strategic effectiveness, because the outlooks of strategic managers are too broad to identify causal links between enterprise system investment and benefit realization, and those of operational managers are too narrow to consider all relevant organizational goals. The most appropriate management level is that of business managers (middle-level management control), as they have a comprehensive understanding of both the capabilities of ES and the business plans for system use. It is not expected that all organizations will achieve benefits in all 25 sub-dimensions, or even in all five main dimensions, but the framework provides an excellent checklist of benefits that have been accomplished in organizations using enterprise systems; a sketch of how it can be used as a checklist follows the listing below.

1. Operational Benefits
   1.1 Cost reduction:
       ▪ Labor cost reduction in customer service, finance, human resources, purchasing, IT services, and training.
       ▪ Inventory cost reduction in inventory turns, dislocation costs, and warehousing costs.
       ▪ Administrative expenses reduction in printing and other business supplies.
   1.2 Cycle time reduction:
       ▪ Customer support activities in order fulfillment, billing, delivery, and customer enquiries.
       ▪ Employee support activities in month-end closing, requisition, HR and payroll, and learning.
       ▪ Supplier support activities in order processing, information exchange, and payment.
   1.3 Productivity improvement:
       ▪ Production per employee, production by labor hours, production by labor costs, increased work volume with the same workforce, and reduced overtime.
   1.4 Quality improvement:
       ▪ Error rates, data reliability, and data accuracy.
   1.5 Customer service improvement:
       ▪ Ease of data access and inquiries.

2. Managerial Benefits
   2.1 Better resource management:
       ▪ Better asset management for improved cost, depreciation, relocation, custody, physical inventory, and maintenance records control, both locally and worldwide.
       ▪ Better inventory management in shifting products to where they are needed and responding quickly to surges or dips in demand. Managers are able to see the inventory of all locations in their region or across boundaries, making possible a leaner inventory.
       ▪ Better production management for coordinating supply and demand, and meeting production schedules at the lowest cost.
       ▪ Better workforce management for improved workforce allocation and better utilization of skills.
   2.2 Improved decision making and planning:
       ▪ Improved strategic decisions for greater market responsiveness, fast profit analysis, tighter cost control, and effective strategic planning.
       ▪ Improved management decisions for flexible resource management, efficient processes, and quick response to operational changes.
       ▪ Improved customer decisions with flexible customer services, rapid response to customer demands, and prompt service adjustments.
   2.3 Performance improvement in a variety of ways at all levels of the organization:
       ▪ Financial performance by lines of business, by product, by customers, by geographies, or by different combinations.
       ▪ Manufacturing performance monitoring, prediction, and quick adjustments.
       ▪ Overall operational efficiency and effectiveness management.

3. Strategic Benefits
   Strategic benefits span a wide spectrum of activities in internal and external areas, in terms of general competitiveness, product strategies, strategic capabilities, and the competitive position of the organization.
   3.1 Support business growth:
       ◦ In transaction volume, processing capacity and capability.
       ◦ With new business units.
       ◦ In products or services, new divisions, or new functions in different regions.
       ◦ With increased employees, new policies and procedures.
       ◦ In new markets.
       ◦ With the industry's rapid changes in competition, regulation, and markets.
   3.2 Support business alliance by:
       ◦ Efficiently and effectively consolidating newly acquired companies into standard business practice.
       ◦ Building consistent IT architecture support in different business units.
       ◦ Changing selling models of new products developed by a merged company.
       ◦ Transitioning new business units to a corporate system.
       ◦ Integrating resources with acquired companies.
   3.3 Building business innovation by:
       ◦ Enabling new market strategy.
       ◦ Building new process chains.
       ◦ Creating new products or services.
   3.4 Building cost leadership by:
       ◦ Building a lean structure with streamlined processes.
       ◦ Reaching business economies of scale in operation.
       ◦ Shared services.
   3.5 Generating product differentiation by:
       ◦ Providing customized products or services, such as early preparation for the new EMU currency policy, customized billing, individualized project services for different customer requirements, and different levels of service appropriate for various sizes of customer organizations.
       ◦ Providing lean production with make-to-order capabilities.
   3.6 Enabling worldwide expansion:
       ◦ Centralized world operation.
       ◦ Global resource management.
       ◦ Multicurrency capability.
       ◦ Global market penetration.
       ◦ Cost-effective worldwide solution deployment.
   3.7 Enabling e-commerce by attracting new customers or getting closer to customers through Web integration capability. The Web-enabled ES provides business-to-business and business-to-individual benefits in:
       ◦ Interactive customer service.
       ◦ Improved product design through direct customer feedback.
       ◦ Expanding to new markets.
       ◦ Building virtual corporations with virtual supply and demand consortia.
       ◦ Delivering customized service.
       ◦ Providing real-time and reliable data enquiries.
   3.8 Generating or sustaining competitiveness:
       ◦ Maintaining competitive efficiency.
       ◦ Building competitive advantage with quick decision making.
       ◦ Staying ahead of competitors through better internal business support.
       ◦ Using opportunities generated by enterprise systems to pull abreast of world leaders by using the same software and being compatible with customers.

4. IT Infrastructure Benefits
   4.1 Building business flexibility by rapid response to internal and external changes at lower cost, and by providing a range of options in reacting to changing requirements.
   4.2 IT cost reduction in:
       ▪ Total cost of maintaining and integrating legacy systems, by eliminating separate data centers and applications as well as their supporting costs.
       ▪ IT staff reductions.
       ▪ Mainframe or hardware replacement.
       ▪ System architecture design and development.
       ▪ System upgrade maintenance.
       ▪ System modification and future changes.
       ▪ Technology research and development.
   4.3 Increased IT infrastructure capability: stable and flexible support for current and future business changes in process and structure.
       Stability:
       • Reliable platforms.
       • Global platforms with a global knowledge pipeline.
       • Transformed IS management and increased IS resource capability.
       • Continuous improvement in process and technology.
       Flexibility:
       • Modern technology adaptability.
       • Extendibility to external parties.
       • Expandability to a range of applications.
       • Customizable and configurable.

5. Organizational Benefits
   Organizational benefits can be evaluated in individual attitudes, employee morale and motivation, and interpersonal interactions.
   5.1 Changing work patterns with shifted focus:
       ◦ Coordination between different interdisciplinary matters.
       ◦ Harmonization of interdepartmental processes.
   5.2 Facilitating business learning and broadening employee skills:
       ◦ Learning by the entire workforce.
       ◦ Shortened learning time.
       ◦ Broadened employee skills.
       ◦ Employees with motivation to learn the process.
   5.3 Empowerment:
       ◦ Accountability and more value-added responsibility.
       ◦ More proactive users in problem solving, transformed from doers to planners.
       ◦ Working autonomously.
       ◦ Users with ownership of the system.
       ◦ Greater employee involvement in business management.
   5.4 Building common visions:
       ◦ Acting as one and working as a common unit.
       ◦ Consistent vision across different levels of the organization.
   5.5 Shifting work focus:
       ◦ Concentrating on core work.
       ◦ Focusing on customers and markets.
       ◦ Focusing on business processes.
       ◦ Focusing on overall performance.
   5.6 Increased employee morale and satisfaction:
       ◦ Satisfied users with better decision-making tools.
       ◦ Satisfied users with increased work efficiency.
       ◦ Satisfied users in solving problems efficiently.
       ◦ Satisfied users with increased system skills and business knowledge.
       ◦ Increased morale with better business performance.
       ◦ Satisfied employees due to better employee service.
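As a rough illustration of how the framework can serve as a checklist, the sketch below encodes the five dimensions and scores a hypothetical organization against them; the sub-dimension labels paraphrase Shang and Seddon (2002), while the scoring scale and values are our own assumptions.

```python
# A minimal sketch of using the five-dimensional framework as a checklist
# in a post-implementation review. Scoring scale is assumed: 0 = not
# observed, 1 = partly observed, 2 = clearly observed.
framework = {
    "Operational": ["Cost reduction", "Cycle time reduction",
                    "Productivity improvement", "Quality improvement",
                    "Customer service improvement"],
    "Managerial": ["Better resource management",
                   "Improved decision making and planning",
                   "Performance improvement"],
    "Strategic": ["Support business growth", "Support business alliance",
                  "Build business innovation", "Build cost leadership",
                  "Generate product differentiation",
                  "Enable worldwide expansion", "Enable e-commerce",
                  "Generate or sustain competitiveness"],
    "IT infrastructure": ["Build business flexibility", "IT cost reduction",
                          "Increase IT infrastructure capability"],
    "Organizational": ["Change work patterns", "Facilitate business learning",
                       "Empowerment", "Build common visions",
                       "Shift work focus", "Increase morale and satisfaction"],
}

# Hypothetical review scores for one organization (defaults to 1).
scores = {sub: 1 for subs in framework.values() for sub in subs}
scores["Cost reduction"] = 2
scores["Increase morale and satisfaction"] = 0

# Report achieved points per dimension against the maximum possible.
for dimension, subs in framework.items():
    achieved = sum(scores[s] for s in subs)
    print(f"{dimension:<18} {achieved}/{2 * len(subs)}")
```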
DISCUSSION

The above benefits were reported by all selected cases, as mentioned by Shang and Seddon (2002); examples of each benefit dimension were also found in cases from each ES vendor. Every business achieved benefits in at least two dimensions. Operational and infrastructure benefits were the most quoted: 170 cases (73% of 233) claimed to have achieved operational benefits, and 194 cases (83%) claimed IT infrastructure benefits (Shang & Seddon, 2002). Operational benefits such as cost, speed, and error rates are measurable in many cases. Managerial benefits, although less tangible, are linked directly with information used at different decision-making levels and with different resources. The most useful information on both these dimensions was provided by business managers or process owners, who had a clearer picture of the impact of the adoption of ES on the overall organization, including their own and their colleagues' decision making. Strategic benefits appear to flow from a broad range of activities in internal and external areas, and are described in terms of general competitiveness, product strategies, and other strategic capabilities. Organizational benefits are mainly reflected in individual attitudes (e.g., employee morale) and interpersonal interactions. Operational benefits may come with increased managerial effectiveness, strategic benefits rely on operational efficiency, and organizational benefits can be realized in parallel with managerial benefits (Shang & Seddon, 2002).

Regardless of whether benefits are tangible or intangible, it is progressively more difficult to measure managerial, organizational, and strategic benefits than infrastructure or operational benefits; this has been an issue of debate since information systems advanced beyond transaction processing systems (Murphy & Simon, 2002). With ERP systems, success has been determined based on the organization's acceptance of the changes that the system introduces. Further, Murphy and Simon (2002) observed that benefits in the organizational and managerial classifications are not only the most difficult to obtain, but also the hardest to quantify.
FUTURE RESEARCH DIRECTIONS

Empirical studies of ERP benefits assessment in different organizations, and comparisons between them, might be an interesting area for further work in this direction. Furthermore, assessment of ERP benefits can be performed at two levels: first at the enterprise level, where the entire ERP system can be assessed with respect to the different types of benefits derived from it; and second at the level of a specific module (application). Both offer interesting areas for future research. Future research efforts should focus on managerial, organizational, and strategic benefits, which remain largely unexplored in terms of measuring and quantifying intangible benefits.
CONCLUSION

Assessing whether investment in enterprise systems pays off is an important issue. Organizations can achieve a number of tangible and intangible benefits through the successful implementation of ERP systems. These benefits can be derived globally, and in the context of globalization it is important that an organization's managers and shareholders understand them as well. ERP helps organizations meet the challenges of globalization with a comprehensive, integrated application suite that includes next-generation analytics, human capital management, financials, operations, and corporate services. ERP is designed to help businesses succeed in the global marketplace by supporting international legal and financial compliance issues, and by enabling organizations to adapt internal operations and business processes to meet country-specific needs. This will help decision makers evaluate the various ERPs available for acquisition and implementation. It will also aid managers all over the world in assessing the benefits of their organization's existing ERP systems in a more objective way.
REFERENCES

AMR Research. (2002). AMR Research predicts enterprise applications market will reach $70 billion in 2006. Retrieved from http://www.amrresearch.com

Anthony, R. N. (1965). Planning and control systems: A framework for analysis. Graduate School of Business Administration, Harvard University, USA.

Berndt, E. R., & Morrison, C. J. (1992). High-tech capital formation and economic performance in U.S. manufacturing: An exploratory analysis. Economics, Finance and Accounting Working Paper #3419, Sloan School of Management, Massachusetts Institute of Technology, USA.

Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 67–77. doi:10.1145/163298.163309

Campbell, S. (1998). Mining for profit in ERP software. Computer Reseller News, (October), 19.

Carlino, J., Nelson, S., & Smith, N. (2000). AMR Research predicts enterprise applications market will reach $78 billion by 2004. Retrieved from http://www.amrresearch.com/press/files/99518.asp

Clemons, E. K. (1991). Evaluation of strategic investments in information technology. Communications of the ACM, 34, 22–36. doi:10.1145/99977.99985

Cooke, D. P., & Peterson, W. J. (1998, July). SAP implementation: Strategies and results. R-121798-RR, The Conference Board, New York, USA.

Davenport, T. H. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, (July/August), 121–131.

Davenport, T. H. (2000). Mission critical—Realizing the promise of enterprise systems. Boston: Harvard Business School.

Davern, M. J., & Kauffman, R. J. (2000). Discovering potential and realizing value from information technology investments. Journal of Management Information Systems, 16, 121–143.

Deloitte Consulting. (1998). ERP's second wave—Maximizing the value of ERP-enabled processes. New York.

Demmel, J., & Askin, R. (1992). A multiple-objective decision model for the evaluation of advanced manufacturing systems technology. Journal of Manufacturing Systems, 11, 179–194. doi:10.1016/0278-6125(92)90004-Y

Emigh, J. (1999). Net present value. Computerworld, 33, 52–53.

Gartner Group. (1998). 1998 ERP and FMIS study—Executive summary. Stamford, CT.

Gefen, D., & Ragowsky, A. (2005). A multi-level approach to measuring the benefits of an ERP system in manufacturing firms. Information Systems Management, (Winter), 18–25. doi:10.1201/1078/44912.22.1.20051201/85735.3

Gefen, D., & Ridings, C. (2002). Implementation team responsiveness and user evaluation of CRM: A quasi-experimental design study of social exchange theory. Journal of Management Information Systems, 19(1), 47–63.

Gorry, A., & Scott Morton, M. S. (1971). A framework for management information systems. Sloan Management Review, 13, 49–61.

Graeser, V., Willcocks, L., & Pisanias, N. (1996). Developing the IT scorecard. London: Business Intelligence.

Hares, J., & Royle, D. (1994). Measuring the value of information technology. Chichester: John Wiley & Sons.

Hicks, J. O. (1997). Management information systems: A user perspective. Minneapolis/St. Paul: West.

Hitt, L. M., Wu, D. J., & Zhou, X. (2002). ERP investment: Business impact and productivity measures. Journal of Management Information Systems, 19(1), 71–98. doi:10.1201/1078/43199.19.1.20020101/31479.10

Hochstrasser, B., & Griffiths, C. (1991). Controlling IT investment: Strategy and management. London: Chapman & Hall.

Hong, K., & Kim, Y. (2002). The critical success factors for ERP implementation: An organizational fit perspective. Information & Management, 40(1), 25–40. doi:10.1016/S0378-7206(01)00134-3

Irani, Z., & Love, P. E. D. (2001). The propagation of technology management taxonomies for evaluating investments in information systems. Journal of Management Information Systems, 17, 161–178.

Irani, Z., Sharif, A. M., & Love, P. E. D. (2001). Transforming failure into success through organisational learning: An analysis of a manufacturing information system. European Journal of Information Systems, 10, 55–66. doi:10.1057/palgrave.ejis.3000384

Jutras, C. (2007). The role of ERP in globalization. Retrieved from http://www.aberdeen.com/summary/report/benchmark/RA_ERPRoleinGlobalization_CJ_3906.asp

Kalling, T. (2003). ERP systems and the strategic management processes that lead to competitive advantage. Information Resources Management Journal, 16(4), 46–67.

Kaplan, R., & Norton, D. P. (1996). Using the balanced scorecard as a strategic management system. Harvard Business Review, (January/February), 75–85.

Katz, A. I. (1993). Measuring technology's business value: Organizations seek to prove IT benefits. Information Systems Management, 10, 33–39. doi:10.1080/10580539308906910

Klaus, H., Rosemann, M., & Gable, G. G. (2000). What is ERP? Information Systems Frontiers, 2, 141–162. doi:10.1023/A:1026543906354

Mabert, V. M., Soni, A., & Venkataraman, N. (2000). Enterprise resource planning survey of U.S. manufacturing firms. Production and Inventory Management Journal, 41(2), 52–58.

Marketwire. (2007). Thinking global? Don't lose sight of profitable growth. Retrieved from http://www.marketwire.com/mw/release_html_b1?release_id=224493

Markus, L. M., & Tanis, C. (2000). The enterprise systems experience—From adoption to success. In R. W. Zmud (Ed.), Framing the domains of IT research: Glimpsing the future through the past. Cincinnati, OH: Pinnaflex Educational Resources.

McFarlan, F. W. (1984). Information technology changes the way you compete. Harvard Business Review, (May/June), 98–103.

Mirani, R., & Lederer, A. L. (1998). An instrument for assessing the organizational benefits of IS projects. Decision Sciences, 29, 803–838. doi:10.1111/j.1540-5915.1998.tb00878.x

Mullin, R. (1999). ERP users say payback is passé. Chemical Week, 161, 25–26.

Murphy, K. E., & Simon, S. J. (2002). Intangible benefits valuation in ERP projects. Information Systems Journal, 12, 301–320. doi:10.1046/j.1365-2575.2002.00131.x

MySAP ERP. (2007). Globalization with localization. Retrieved from http://www.sap.com/usa/solutions/grc/pdf/BWP_mySAP_ERP_Global_Local.pdf

Nandish, V. P., & Irani, Z. (1999). Evaluating information technology in dynamic environments: A focus on tailorable information. Logistics Information Management, 12, 32. doi:10.1108/09576059910256231

O'Leary, D. E. (2000). Enterprise resource planning systems: Systems, life cycle, electronic commerce, and risk. Cambridge: Cambridge University Press.

Olhager, J., & Selldin, E. (2003). Enterprise resource planning survey of Swedish manufacturing firms. European Journal of Operational Research, 146, 365–373. doi:10.1016/S0377-2217(02)00555-6

Olson, D. L. (2004). Managerial issues of enterprise resource planning systems. McGraw-Hill International.

Parker, M., & Benson, R. (1988). Information economics: Linking business performance to information technology. London: Prentice Hall.

Porter, M. E., & Miller, V. E. (1985). How information gives you competitive advantage. Harvard Business Review, 63, 149–160.

Rackoff, N., Wiseman, C., & Ullrich, W. A. (1985). Information systems for competitive advantage: Implementation of a planning process. MIS Quarterly, 9, 285–294. doi:10.2307/249229

Remenyi, D., & Sherwood-Smith, M. (1999). Maximize information systems value by continuous participative evaluation. Logistics Information Management, 12, 14–25. doi:10.1108/09576059910256222

Reynolds, G. W. (1992). Information systems for managers. Minneapolis/St. Paul, MN: West.

Ross, J. W., & Vitale, M. (2000). The ERP revolution: Surviving versus thriving. Information Systems Frontiers, 2(2), 233–241. doi:10.1023/A:1026500224101

Sarkis, J., & Sundarraj, R. P. (2003). Managing large-scale global enterprise resource planning systems: A case study at Texas Instruments. International Journal of Information Management, 23(5), 431–442. doi:10.1016/S0268-4012(03)00070-7

Schlack, M. (1992). IS has a new job in manufacturing. Datamation, (January 15), 38–40.

Schultheis, R., & Sumner, M. (1989). Management information systems: The manager's view. Boston: Irwin.

Seddon, P. B., Graeser, V., & Willcocks, L. (2002). Measuring organizational IS effectiveness: An overview and update of senior management perspectives. The Data Base for Advances in Information Systems, 33, 11–28.

Shang, S., & Seddon, P. B. (2002). Assessing and managing the benefits of enterprise systems: The business manager's perspective. Information Systems Journal, 12, 271–299. doi:10.1046/j.1365-2575.2002.00132.x

Shang, S., & Seddon, S. (2000). A comprehensive framework for classifying the benefits of ERP systems. Proceedings of the Americas Conference on Information Systems.

Silver, M. (1990). Decision support systems: Directed and non-directed changes. Information Systems Research, 1, 47–88. doi:10.1287/isre.1.1.47

Songini, M. C. (2004, August 23). Ford abandons Oracle procurement systems, switches back to mainframe apps. Retrieved from http://www.computerworld.com/softwaretopics/erp/story/0,10801,95404,00.html

Steadman, C. (1999). Calculating ROI. Computerworld, 33, 6.

Stewart, G., Milford, M., Jewels, T., Hunter, T., & Hunter, B. (2000). Organizational readiness for ERP implementation. In Proceedings of the Americas Conference on Information Systems (pp. 966–971).

Strassmann, P. (1985). Information payoff: The transformation of work in the electronic age. London: Collier Macmillan.

Tallon, P., Kraemer, K., & Gurbaxni, V. (2000). Executives' perception of business value of information technology: A process-oriented approach. Journal of Management Information Systems, 16(4), 145–173.

Venkataraman, N., Henderson, J., & Oldach, S. H. (1993). Continuous strategic alignment: Exploiting IT capabilities for competitive success. European Management Journal, 11, 139–149. doi:10.1016/0263-2373(93)90037-I

Weil, M. (1999). Managing to win. Manufacturing Systems, 17(November), 14.

Wilderman, B. (1999). Enterprise resource management solutions and their value. Stanford, CT: MetaGroup.

Willcocks, L. (Ed.). (1994). Information management: Evaluation of information systems investments. London: Chapman & Hall.

Willcocks, L. P., & Lester, S. (Eds.). (1999). Beyond the IT productivity paradox. Chichester: John Wiley & Sons.

Wysocki, R., & DeMichiell, R. L. (1997). Managing information across the enterprise. New York: John Wiley & Sons.

ADDITIONAL READING

Bingi, P., Sharma, M. K., & Godla, J. K. (1999). Critical issues affecting an ERP implementation. Information Systems Management, (Summer), 121–131.

Litecky, C. R. (1981). Intangibles in cost/benefit analysis. Journal of Systems Management, 32, 15–17.

Motwani, J., Mirchandani, D., Madan, M., & Gunasekaran, A. (2002). Successful implementation of ERP projects: Evidence from two case studies. International Journal of Production Economics, 75.

Olson, D. L. (2004). Managerial issues of enterprise resource planning systems (international ed.). Singapore: McGraw-Hill.

Simms, J. (1997). Evaluating IT: Where cost-benefit can fail. Australian Accountant, (May), 29–31.

van Everdingen, Y., van Hillegersberg, J., & Waarts, E. (2000). ERP adoption by European midsize companies. Communications of the ACM, 43(4), 27–31. doi:10.1145/332051.332064
This work was previously published in Enterprise Resource Planning for Global Economies: Managerial Issues and Challenges, edited by Carlos Ferran and Ricardo Salim, pp. 77-92, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 5.8
Experiences of Cultures in Global ERP Implementation
Esther Brainin
Ruppin Academic Center, Israel

DOI: 10.4018/978-1-59904-531-3.ch010
ABSTRACT
limitations of technological implementation and use policies to improve the benefits generated by the technology. Topics of explicit concern to ERP implementation in global organizational economies related to organizational and societal culture are discussed, and suggestions for managerial mechanisms for overcoming major obstacles in this process are proposed.
INtrODUctION Enterprise resource planning (ERP) systems impose high demands on virtually all organizational members since these process-oriented technologies are designed to standardize business procedures across the enterprise. In addition to its potential benefits, the introduction of an ERP
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Experiences of Cultures in Global ERP Implementation
system into an enterprise becomes the means for achieving organizational standardization and integration. Hence, it can be viewed as an organizational “boundary crossing” channel, and as such, ERP integration processes are more than likely to face resistance to change. It is not surprising, then, that while companies worldwide have made substantial investments in the installation of ERP systems, difficulties in implementation and uncertain bottom-line benefits (Davenport, 1998; Kumar & Hillegersberg, 2000; Robey, Ross, & Boudreau, 2000) may be attributed to a failure in the implementation process (Brainin, Gilon, Meidan, & Mushkat, 2005; Klein, Conn, & Speer-Sorra, 2001) rather than a deficiency in the technology itself. The complexity of ERP implementation is exacerbated when ERP systems are globally implemented since the integrative nature of this technology calls for the crossing of national or regional, in addition to organizational boundaries. Since countries, regions, and organizations differ in their absorptive capacity (Lane, Koka, & Pathak, 2006) and their societal and organizational culture (Hall & Hall, 1990; Hofstede, 1980; Javidan, House, & Dorfman, 2004a; Trompenaars, 1996), implementation of ERP systems in global enterprises requires a detailed examination of potential gaps or inconsistencies in the interaction between new technologies, end users, and organizations in different countries and/or regions (e.g., Boerma & Kingma, 2005; Dube & Robey, 1999; Krumbholz & Maiden, 2001; Leidner & Kayworth, 2006). The approach taken here blends interpretive and positivist theories (Lee, 1991). The use of interpretive theories enables portrayal of technological systems as “cultural tools,” ascribed with different interpretations. Utilizing positivist theories opens the way for mapping cultural differences between countries, scoring countries in accordance with their societal culture characteristics, and examining the relationship between societal culture and various outcomes such as leadership style, and economic and social indicators. The
1296
power of integrative examination is in producing helpful insights to poor global technology implementation antecedents. Technology does not constitute an entity per se, and despite its standard engineering components, its preferred pattern of use can differ as a result of what Orlikowski (2000) described as the diverse attributions of meanings and power relation conceptions within and between organizations. The social construction of technological systems (Bijker, Hughes, & Pinche, 1987) as an interpretive theory maintains that different types of user groups differ in how they conceptualize, interpret, and exploit technologies and their potential. Accordingly, the effective utilization of a technology is the result of explicit and implicit “negotiations” between groups of users regarding the desired use of the technology and its organizational contribution and significance. “Technology should be treated as simultaneously social and physical and examine the interplay between the material characteristics of technology and the social context within which it is designed and deployed” (Grant, Hall, Wailes, & Wright, 2006, p. 4). In the literature on culture, technology is viewed as a “cultural artifact” (Schein, 1992), thus constituting an integral and inseparable part of organizational culture, which is reproduced in everyday working routines. Consequently, global ERP systems must not be viewed only as large offthe-shelf software solutions that provide integrated business and software systems to a customer, but also as cultural tools, mostly designed and invented by the Western world but implemented in diverse local/regional settings, all having different cultural characteristics (Davison, 2002). In fact, cultural differences may impede ERP implementation even when its diffusion occurs within Western countries. Boerma and Kingma (2005) described Nestlé’s ERP implementation as an example of a misfit between the decentralized organizational culture of Nestlé and the centralized culture imposed on the conglomerate by the adoption of the ERP system. Their example high-
lights differences in cultural attributes between a preexisting organizational culture and the new technological culture after ERP implementation. The Nestlé study is an example of how ERP software packages, implemented in different organizational contexts, force local cultures to surrender to ERP mechanisms and logic that disregard local cultures and leadership styles. Such an implementation process may result in poor and delayed exploitation due to a misfit between the cultural characteristics that are embedded in and represented through the new technology, and the target organization’s culture and its end user practices and perceptions. Accordingly, transfer of technology from one region/country to another necessitates the addition of the cultural dimension to Bijker et al.’s (1987) theory of the social construction of technology among end users. Leidner and Kayworth (2006), who reviewed 86 studies on the cultural dimensions of ERP implementation, suggested the term “technology cultural conflict” to “…lend insights into the understanding of the linkages between IT and culture” (p. 357). Cultural dimensions are entwined in technology implementation on different levels of analysis and operation (e.g., societal, organizational). This chapter outlines the various stages of ERP implementation, indicating the potential for cultural discordance or concordance at every stage. Managers and end users are defined as actors during this long process. They are likely to be involved in ERP implementation from the initial stage of strategic planning and decision making, and continue their involvement by leading and championing the implementation using goal setting, feedback, and reward techniques. Clearly, global ERP implementation positions managerial and leadership practices in a multicultural context. Past research relating societal culture to IT implementation was based on Hofstede’s pioneering work (e.g., Ford, Connelly, & Meister, 2003). However, it would be unwise to assume any aspect of reality is quantifiable by a single mea-
sure. Consequently, the findings of the GLOBE project, an impressive research effort conducted by 170 investigators from 62 countries, will be presented. This project measured culture at different levels through both practices and values, and explored the relationship between culture and societal, organizational, and leadership effectiveness (House, Hanges, Javidan, Dorfman, & Gupta, 2004). The basic assumption here is that global ERP systems are not universally acceptable or effective, and that testing the cross-cultural generalizability of ERP systems in organizations will produce a managerial agenda that facilitates the implementation process. Topics of explicit concern to ERP implementation in global organizational economies related to organizational and societal culture, and proposal for managerial mechanisms for overcoming major obstacles in this process will be addressed. The first section of this discussion provides a brief overview of the dimensions of organizational and societal culture. This is followed by a description of the stages of ERP implementation and the actors involved in the process at every stage. Building upon this review, recommendations are proposed for the relationships between ERP implementation and cultural dimensions. The conclusion summarizes the key points drawn from the analysis.
“sOcIEtAL cULtUrE” AND Its IMPAct ON OrGANIZAtIONAL cULtUrE Cultural factors take center stage in the discussion of global ERP implementation. Implementation of the same technology in different countries produces various cultural encounters that may facilitate or inhibit its exploitation (Ford et al., 2003). Thus, the concept of organizational and societal culture should be addressed. Work organizations are distinguished by social experiences that can be called “cultures.” Such experiences,
however, do not necessarily represent the organization as a whole. In this sense, organizational culture is itself organized into various work settings (Frost, Moore, Louis, Lundberg, & Martin, 1985) and produces a form of locally recognized and common social knowledge, similar to the common knowledge that develops within a clan. The organizational culture is the glue holding a group of people together. It is produced over a period of time, and helps solve the group’s problems of internal integration and survival in an external environment (Schein, 1992). At the same time, on a different level of analysis, employees working in the same local societal culture share the same societal meaning system (Hofstede, 2001), which supports their adaptation to their local organization. A shared meaning system can be formed at different levels, from the micro level of the group or team, the meso level of the organization, up to the macro level of nations and beyond (Shokef & Erez, 2006). The question arises whether organizational culture is a reflection of societal culture, or organizations are culture-producers. The open system theory builds on the principle that organizations are ‘open’ to their environment, and devote a great deal of attention to understanding their immediate tasks or business environment, through direct interactions with their customers, competitors, suppliers, labor unions, and government agencies. All these stress the importance of being able to bridge and manage critical boundaries and areas of interdependence, and develop appropriate operational and strategic responses. Thus, the surrounding societal culture is an external source of influence on organizational culture through the behavior of organizational members who introduce their beliefs, norms, and values into the organization. Thus, consistent with the open systems theory, we expect to find systematic societal variations over and above within-societal differences. Yet individual organizations may also have their own unique culture: organizational culture can be examined as an independent entity
that develops its own unique dimensions because of differences between tasks, expertise/occupations, and activities (Huang, Newell, Galliers, & Pan, 2003; Rousseau & Cooke, 1988; Sackmann, 1992; Trice, 1993). It is suggested that these different views be addressed as two perspectives that complement each other.
Methodological Issues Related to the Examination of Organizational and Societal Culture
Experiences of Cultures in Global ERP Implementation
Furthermore, culture research has been difficult to conduct (Straub, Loch, Evariso, Karahanna, & Srite, 2002) due to the lack of clear concepts or measures of culture. Another issue is what particular level of culture one should study (Pettigrew, 1990). Some researchers argue that culture cannot be objectively analyzed at a single level. Straub et al. (2002), for example, suggested a more realistic view of culture. In these researchers’ views, individuals are simultaneously influenced by an array of cultural values on the national, ethnic, organizational, or even sub-cultural levels. A final difficulty in studying organizational culture relates to the content under investigation, which is exceptionally large, since culture refers to the process of meaning construction and sense-making. Thus, any attempt to measure and describe culture in general, and organizational culture in particular, must focus on certain parts of the specific culture and disregard other elements. Nonetheless, the necessity to improve research methodologies regarding cross-cultural differences does not reduce the impact that culture has on differences in work behaviors in general (Erez & Earley, 1993) and in information systems implementation in particular (Leidner & Kayworth, 2006).
societal culture and Globalization Many firms struggle with the interpretation, implementation, and impact of globalization on their everyday operations. Adopting a global mindset challenges managers and companies to look beyond their own operations so that they may improve their practice of global management beyond the experience provided by their local businesses. The importance of international thinking lies in its “ability to serve as a bridge between the home country (i.e., the head office’s country) and the local sites’ environments, playing the role of cultural interpreter for both sides” (Jeannet, 2000, p. 37). The road map to strategic global competitiveness assumes that managers have an
understanding of how attributes of societal and organizational cultures affect selected organizational practices. This assumption is the key issue in global ERP implementation. Researchers who have explored the link between information technology and organizational culture claim that a firm-level study of culture’s influence on the use of information systems should not only examine organizational culture, but also its possible interaction with national or organizational sub-culture values and how these interactions potentially influence behaviors (Dube & Robey, 1999; Kaarst-Brown, 2004; Leidner & Kayworth, 2006). At the same time, they assert that in studying cross-cultural differences, research should address three types of methodological biases that have not been sufficiently taken into account (e.g., Hofstede, 1980): construct bias, method bias, and item bias. However, the GLOBE project’s very adequate dataset was able to replicate and extend Hofstede’s landmark study. It was able to test hypotheses relevant to relationships among societal-level variables, organizational practices of three different industries’ sectors (financial, food, and telecommunications) in every country, and leader attributes and behavior. Furthermore, the data was sufficient to replicate middle-management perceptions and unobtrusive measures. The project consisted of three phases related to three empirical studies: Phase One was devoted to the development of scales that assess organizational and societal culture, and culturally shared implicit theories of leadership. Evidence for construct validity of the culture scales was provided from several sources such as Hofstede (1980) and Schwartz (1992) (for a detailed description, see Hanges & Dickson, 2004). Phase Two was devoted to the assessment of nine core attributes of societal and organizational cultures. The nine cultural dimensions that served as independent variables in the GLOBE program are: uncertainty avoidance (Hofstede, 1980), power distance (Hofstede), institutional collectivism (Triandis, 1995),
in-group collectivism (Triandis, 1995), gender egalitarianism (based on Hofstede’s Masculinity index), assertiveness (based on Hofstede’s Masculinity Index), future orientation (Kluckhohn & Strodtbeck, 1961), performance orientation (McClelland, 1961), and human orientation (Kluckhohn & Strodtbeck, 1961; Putnam, 1993). When quantified, these attributes were referred to as cultural dimensions, and 62 societal cultures were ranked accordingly, testing hypotheses about the relationship between these cultural dimensions and several important dependent variables such as economic prosperity, success in basic science, societal health, life expectancy, and so forth. Phase Two also investigated the interactive effect of societal-cultural dimensions and industry (finance, food processing, and telecommunication) on organizational practices and culturally endorsed implicit theories of leadership (House et al., 2004). By measuring these cultural dimensions across 62 countries, the GLOBE project liberated organizational behavior research from U.S. hegemony in theory and practice. It is a valuable database that can serve as an important tool for designing global project interventions. In Phase Three, the impact and effectiveness of specific leadership behaviors and styles of CEOs on subordinates’ attitudes and performance over three to five years were investigated. This phase also included testing of the moderating effects of culture on relationships between organizational practices and organizational effectiveness. It seems clear that an ERP system that is developed in one specific region or state and implemented in another region (the regions may differ only in the language spoken) constitutes an inter-cultural encounter that generally ends in conflict on one level or another. These encounters must be managed and implementation processes must be planned to cope with the diverse gaps and expectations. The latter can create antagonism among end users at different levels, leading to partial or deficient implementation. Therefore, one of the most important challenges is ac-
knowledging and appreciating cultural values, practices, and subtleties in different parts of the world. McDonald’s is an illuminating example of cultural sensitivity. In France, McDonald’s serves wine and salads with its burgers. In India, where beef products are taboo, it created a mutton burger named Maharaja Mac. To succeed in global business, managers need the flexibility to respond positively and effectively to practices and values that may be dramatically different to what they are accustomed. However, it is not easy for one to understand and accept practices and values that are different from one’s personal experiences. The GLOBE research project has shown that the status and influence of leaders vary considerably as a result of the cultural force in the countries or regions in which the leaders function.
STAGES AND ACTORS IN TECHNOLOGICAL CHANGE

Technological change entails a long process involving different types of actors at each of several stages. Beyond having significant implications for the organization's form and function, the decision to purchase an ERP system for global implementation necessitates a heavy investment of resources. The uncertainty surrounding the decision and its implications, as well as the implementation itself, lengthens and complicates the process. In the developing body of academic literature on ERP project implementation, researchers stress the importance of dividing ERP implementation into several phases or levels, suggesting 'key activities' to be included in every phase (Al-Mudimigh, Zairi, & Al-Mashari, 2001; Markus & Tanis, 2000; Parr & Shanks, 2000) and using critical success factors (CSFs) for planning and monitoring the implementation. Dividing the process into stages and verifying that the target is met at each stage may alleviate some of the uncertainty and allow proper oversight, including a clear definition of starting and ending points for every stage; Parr and Shanks (2000) referred to the latter as "realistic milestones and end date…" (p. 293).

Although the purchase of the system can be viewed as the first stage in the implementation process, there is a consensus among researchers that organizational events and actions preceding the purchasing stage affect system selection and its eventual implementation. Al-Mudimigh et al. (2001) refer to these stages as "strategic levels," which include evaluating the current system, articulating a clear vision and objectives, and formulating an implementation strategy. Parr and Shanks (2000) call the first stage the "Planning" stage, which includes clarifying the rationale for the system and determining the high-level project scope, while Markus and Tanis (2000) call it the "Chartering Phase," which includes building a business case based on a sound assessment of business conditions and needs.

The implementation stages proposed below can serve as a common denominator for the above models. They treat the implementation process as composed of five stages (most models comprise only four): three occur before the new technology is introduced into the organization, and the final two follow its introduction. Although several stages may overlap, it is advisable to address each as a separate stage and to manage the progress from one stage to the next based on achievement of each stage's targets. Many actors in the organization and its environment are involved in the implementation process, yet their level of involvement may vary from stage to stage. While the organization's senior managers are the dominant and central actors in the initial stages of the process, responsibility passes to project supervisors appointed by management, and to end users, as the implementation proceeds. The stages that occur before the system is delivered to the organization are the following.
Initiative Stage

The need for a new global ERP system can emerge as a result of institutional isomorphism, a technology crisis or inadequacy, a new technology push, or global business development. In listing the reasons for adopting enterprise systems, Markus and Tanis (2000) differentiate between small companies' and large companies' reasons, identifying them as technical vs. business reasons (p. 180). In most cases, actors within or outside the organization identify a need or decide to promote technological change by initiating the implementation process. Following such actions, a careful review of the technological change should be conducted through strategic planning before any decision is made. ERP implementation calls for a major organizational transformation, which must be planned strategically and implemented thoroughly. The process of strategic planning starts with identifying the impetus for changes in the company's business and IT systems, and their expected strategic and operational benefits. This process helps people understand the need for change; it sparks their interest in it and promotes their commitment to the process (Adams, Sarkis, & Liles, 1995; Al-Mashari, 2003). Strategic planning associated with ERP implementation relates to process design, process performance measurement, and continuous process improvement, also known as business process reengineering (Hammer & Champy, 1993). Additionally, it deals with critical success factors in the early phases of ERP implementation (Nah, Zuckweiler, & Lau, 2003). In classical strategic planning, employees or representatives from all organizational ranks are commonly co-opted into the process. The case of ERP implementation at Texas Instruments (TI) described by Sarkis and Sundarraj (2003) illustrates how market forces compelled TI to make a radical shift in its business. TI's strategic response identified flexibility and time (speed) as the key strategic performance metrics that had to be stressed for the company to remain competitive; consequently, these metrics guided the design and implementation of its new ERP system. Al-Mudimigh et al. (2001) suggested that a company's decision whether to engage in business process reengineering before, during, or after ERP implementation depends on its specific situation.
Decision Stage

This stage leads to a decision to invest resources in acquiring and assimilating a new system. When deciding to allocate resources for the purchase of a global ERP system, a cost assessment that includes cost estimations made by representatives of all operational departments (e.g., logistics, training) must be performed. In addition to the cost of the system itself, this assessment must also include the costs of the implementation processes. It is standard practice today to assume that 18% of an ERP project budget should be reserved for implementation and consulting services (Gartner Group, 2006). The assured availability of resources for the entire implementation process is an important variable in predicting implementation quality (Klein et al., 2001). Between the second and third stages, it is important to examine which system best answers the organizational needs defined in the strategic planning process. Organizational suitability is a compulsory condition, but it is not enough to ensure successful implementation: transforming an off-the-shelf product into one that is appropriate for a specific organization requires customization. In this process it is essential that developers meet users; therefore, potential end users of the new system should be involved in the selection process.
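To make the cost assessment described above concrete, the minimal sketch below aggregates departmental cost estimates and applies the 18% reserve for implementation and consulting cited above (Gartner Group, 2006). All department names and monetary figures are invented for illustration; this is an assumption-laden sketch, not a prescribed budgeting method.

```python
# Hypothetical cost assessment for a global ERP acquisition decision.
# All department names and monetary figures are illustrative assumptions.
department_estimates = {
    "logistics": 450_000,
    "training": 300_000,
    "manufacturing": 700_000,
    "finance": 250_000,
}
system_license_cost = 2_000_000  # assumed vendor quote

# Direct costs: the system itself plus departmental implementation estimates.
direct_costs = system_license_cost + sum(department_estimates.values())

# Reserve 18% of the total project budget for implementation and consulting,
# per the rule of thumb cited in the chapter (Gartner Group, 2006).
CONSULTING_SHARE = 0.18
total_budget = direct_costs / (1 - CONSULTING_SHARE)
consulting_reserve = total_budget - direct_costs

print(f"Direct costs:         ${direct_costs:,.0f}")
print(f"Consulting reserve:   ${consulting_reserve:,.0f}")
print(f"Total project budget: ${total_budget:,.0f}")
```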
Selection and Leader Nomination Stage

A development or selection process leads to an order for a new technological system. Thus, at this stage, ensuring the involvement of users in system development is an important mechanism for improving system quality and utilization (Baroudi, Olson, & Ives, 1986). Souder and Nashar (1995) referred to the matching process between new technology and end user needs as the "transfer of technology between a developer and a user" (p. 225), and suggested that "failures (of transfer) often occur as a consequence of many natural barriers within the transfer process (e.g., a developer's choice of an inappropriate technology or a user's risk aversion)." Leonard-Barton and Sinha (1993) stressed that effective internal technology transfer (the implementation of technical systems developed and disseminated to operational sub-units within a single organization) depends not only upon the cost, quality, and compatibilities of the technology, but also upon two processes of interaction between developers and users: user involvement in development, and adaptation by developers and users of both the technical system itself and the workplace. A link between producers and users is needed to ensure a good fit between any new global ERP system and the organizations that must implement it.

Global ERP technology is based on "best practices" solutions, an ideal recipe for the most effective performance of business functions. Still, the organization must ask whether standard solutions fit its organizational requirements. Off-the-shelf ERP software packages are implemented in different organizational contexts, which often deviate considerably from the context in which the packages were originally designed and developed. Boersma and Kingma (2005) noted that the structure of ERP systems typically requires the redefinition of work from the actor's point of view.

At this point it is important to nominate the project leader who will manage the fourth and fifth stages of the implementation process and achieve the organizational improvements that the new system was designed to accomplish. Hammer and Champy (1993) suggested that the implementation project leader establish a number of teams composed of potential end users, representing all the operational units where the new system will be implemented. The number of teams depends on the quantity and complexity of the operational processes that will undergo massive changes during system implementation. The teams should analyze the coming changes and find solutions to problems that are likely to occur during the implementation stage. The leader operates as an advisor, assisted by a steering committee comprising the senior managers who are in charge of decision making during the process. Two of the committee's most important tasks are to define priorities among the various change processes and to arbitrate the conflicts of interest that are almost certain to arise.

The final two stages occur after the new system is delivered to the organization. However, several activities must be performed between the third and fourth stages to ensure a smooth transfer of responsibility for the implementation and operation of the new system to the end users.
Basic Operational Stage

In this stage, the new system is installed in the workplace and employees "…ideally become increasingly skillful, consistent, and committed in their use of the innovation" (Klein & Speer-Sorra, 1996, p. 1057). Management attention should be directed to setting priorities for ERP implementation and influencing the implementation process and project duration (Sheu, Chae, & Yang, 2004). This is the stage for end user training, which, together with goal setting, feedback, and rewards, is a fundamental condition for system operation. The focus in this stage is on system customization; developers should interact with end users in fine-tuning the system. This stage allows explicit and implicit 'negotiations' between groups of users regarding the desired use of the technology and its organizational contribution. Light and Wagner (2006) found in their study that "the use of customization to enable socio-technical integration allows for the recognition of existing forms of integration and the selective incorporation of existing socio-technical practices with new ones" (p. 225). Thus it is important to acknowledge that organizational actors can respond to and influence ERP implementation.
Routine Operations Stage

In this final stage of ERP implementation, the new system becomes established and the employees who operate it adopt stable work patterns. At this stage, the organizational improvements resulting from system operation should be assessed, and project leaders should conduct a post-project review of the entire process in order to achieve what Markus and Tanis (2000) called "technological and business flexibility for future developments." Cultural dimensions affect the entire change process yet play varying roles in the different stages of ERP implementation. Since implementation methods are culturally dependent, they should be planned in advance, after careful scrutiny of the organizational and societal cultures involved.
THE INTERFACE BETWEEN CULTURE AND GLOBAL ERP IMPLEMENTATION

In global ERP implementation, the initiation, decision, and selection stages are usually dominated by the parent company and may reflect a desire for more control and standardization of work processes. Thus, the new system is imposed 'top-down' on most local managers and all end users, turning the process into a deterministic one, with no leeway for local impact related to structural and/or cultural adjustment. Indeed, corporations' different national and organizational cultures have been shown to be associated with problems during ERP implementation (Krumbholz & Maiden, 2001). Using the contingency theory of organization, Donaldson (1987) presented a model wherein incompatibility between technology and organizational structure results in lowered performance. Donaldson believed that in such cases the organization will eventually make an adjustment to restore compatibility between the new technology and the organizational structure (SARFIT: structural adjustment to regain fit). Applying Donaldson's argument to global implementation, cultural adjustment must also be addressed to achieve adequate compatibility (Krumbholz & Maiden, 2001; Sheu et al., 2004). Implementation of global ERP systems implies that management must have an international perspective, or what Jeannet (2000) referred to as a "global mindset." In the various stages of global ERP implementation, different encounters with cultural manifestations are to be expected; thus, a cross-level analysis of cultural issues is indispensable.

To understand the interface between culture and global ERP implementation, one must examine the difference between treating organizations as "culture producers" (Rousseau & Cooke, 1988) and treating them as reflections of the national-societal culture surrounding them. These two approaches can be seen as complementary. Although organizational culture is an internal attribute of an organization, Erez and Earley (1993) pointed out that studies that focus only on the internal dimensions of organizational culture, without considering the broad cultural context in which the organization operates, are deficient. The impact of the broader culture on the culture of a specific organization has been considered in numerous studies (e.g., Hofstede, 1980; Schwartz, 1992; Trompenaars, 1996). These studies ranked approximately 60 countries according to cultural characteristics such as individualism/collectivism, power distance, uncertainty avoidance, universalism/particularism, affective/neutral culture, and specific/diffuse culture. In line with this, the following analysis of the interface between culture and ERP implementation relates to both organizational and societal-national culture manifestations. It is important to stress that the analysis of the link between the implementation phases and cultural influences relates to specific cultural manifestations, since any attempt to measure and describe culture in general, and organizational culture in particular, can be concerned with only certain parts of the culture.

In the context of information technology, an important point is that information technologies are not culturally neutral and may come to symbolize a host of different values driven by underlying assumptions about their meaning, use, and consequences (Coombs, Knights, & Willmott, 1992; Robey & Markus, 1984; Scholz, 1990). A value-based approach reveals the types of 'cultural conflicts' that might arise from the development, adoption, use, and management of IT (Leidner & Kayworth, 2006). Thus, some implementation phases are influenced by the international, national, regional, and/or business-level culture, while other phases are influenced by the culture the organization itself produces. It should be emphasized that the implementation of a global ERP system constitutes a force for cultural change, and new cultural dimensions may be expected to emerge in the assimilating organization at the end of the process. In this sense, organizational culture is conceptualized as both a dependent and an independent variable. The purpose of portraying the cultural interface in global ERP implementation is to disclose the discrepancies and inconsistencies that must be addressed to restore fit or reduce culture conflict, with the end result being heightened global ERP implementation effectiveness.

Mapping the link between societal and organizational culture and the global implementation of ERP systems requires us to relate separately to the three initial stages, described above, which take place prior to the installation of the system, and to the last two stages, which occur after the organization has received the system. When a global organization decides to introduce an ERP system into its worldwide branches, senior management is responsible for executing the first three phases, as outlined above. The decision to acquire a new system is forced upon the organization's distant branches, which do not have the authority to make decisions of this type. In contrast, the fourth and fifth stages are within the range of authority and responsibility of the individual sites, which must consequently plan and implement these stages effectively. Only the local office (in which the organizational management resides) carries out all five stages. Assuming that the implementation is to be executed in a number of countries, the cultural implications of the global change involved in the first three phases need to be discussed by central senior management, while stages four and five are assumed to be executed by the organization's local subsidiaries.
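The division of responsibility just described can be summarized in a simple data structure. The following minimal sketch maps each stage of the five-stage model proposed earlier in the chapter to its dominant actor; the mapping paraphrases the text and is illustrative, not a formal framework from the literature.

```python
# The five-stage global ERP implementation model, mapped to the actor that
# dominates each stage according to the chapter. The mapping is an
# illustrative paraphrase, not a formal specification.
IMPLEMENTATION_STAGES = [
    ("Initiative", "parent company senior management"),
    ("Decision", "parent company senior management"),
    ("Selection and leader nomination", "parent company senior management"),
    ("Basic operation", "local subsidiary: project leader and end users"),
    ("Routine operation", "local subsidiary: project leader and end users"),
]

def responsible_actor(stage_number: int) -> str:
    """Return the dominant actor for a stage, numbered 1 through 5."""
    name, actor = IMPLEMENTATION_STAGES[stage_number - 1]
    return f"{name}: {actor}"

if __name__ == "__main__":
    for n in range(1, 6):
        print(responsible_actor(n))
```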
The Importance of Societal Culture in the First Three Stages of ERP Implementation

The product, or end result, of the first three stages is: (1) the decision to invest resources in acquiring a global information system, and (2) the selection of a suitable system. The central axis around which these decisions are made is the process of strategic thinking. The strategic planning of firms intent on becoming global enterprises has long-term implications. Such planning requires balanced strategic decisions involving numerous variables, such as the values and problem-solving methods of owners, managers, and end users; formal recruitment procedures; reward systems; and regulation and control processes. As a company expands internationally, it needs to fit its corporate culture to the various societal cultures of its overseas operations to obtain the maximum benefit from the newly implemented system. Given that we are discussing a complex process of change, the cultural issues in this process take on critical significance, since they may affect the sequence of countries in which the system is implemented.
The Initiating Stage

At this stage it is important to discover the degree of receptivity of companies in a particular culture (represented by a nation) to certain types of innovations, compared with companies in other cultures. This knowledge reflects the overall 'technological leap' that could be expected to occur as a result of global ERP implementation. This course of action is supported by empirical findings from several studies. Van Everdingen and Waarts (2003) compared ERP implementation in 10 European countries and found that "national culture has a significant influence on the country adoption rate" (p. 217). The extent of cultural openness (accommodation of another's culture) has a strong positive influence on the degree to which technology transfer is successful (Hussain, 1998). IT is less readily adopted in risk-averse and high power distance cultures, since technology is perceived as inherently risky (Hasan & Ditsa, 1999; Png, Tan, & Wee, 2001; Srite, 2000; Thatcher, Srite, Stepina, & Liu, 2003). Thus, the impetus for acquiring a global ERP system will most likely come from countries whose cultural characteristics allow them to cope with change and uncertainty.

The GLOBE findings on future orientation can be used to assess a country's openness to change and flexibility. Societies that score higher on future orientation tend to achieve economic success, have organizations with a longer strategic orientation, have flexible and adaptive organizations and managers, and so forth. The two most future-oriented countries are Singapore and Switzerland; the lowest are Argentina and Russia (Ashkanasy, Gupta, Mayfield, & Trevor-Roberts, 2004, p. 304). Studies of India and Southeast and East Asia showed a process of strategic planning quite distinct from the rational approach dominant in the West. Haley and Tan (1999) observed that "strategic planning in South and Southeast Asia has developed into a process which is ad-hoc and reactive, highly personalized, idiosyncratic to the leader and which uses relatively limited environmental scanning" (p. 96).
The Decision Stage

The decision to invest resources in a new system must be made only after a strategic process is undertaken by the parent company. This process will also help the parent company decide how, and in what order, the system is to be distributed to different countries. Such decisions are culturally endorsed. Moreover, target country selection has two important implications. The first relates to the examples given in our description of the first phase: the parent company must anticipate differences among the implementing countries in their abilities to cope with change. The higher an implementing country is in uncertainty avoidance and power distance, the more resources should be allocated to it for the implementation process. Support from the parent company may greatly alleviate the problems that such countries encounter in the implementation process. Second, different countries may have different perspectives on the strategic goals of the new system, based on differences in societal values. For instance, service quality dimensions function differently across national cultures (Kettinger, Lee, & Lee, 1995), and information privacy concerns were found to vary across countries: countries high in uncertainty avoidance and power distance exhibited higher levels of government involvement in privacy regulation (Milberg, Burke, Smith, & Kallman, 1995). Since agreement on strategic goals is an important condition for implementing and exploiting the new IT system, early mapping of differences in the perception of these goals is imperative. Gaps will appear as a result of differences in national cultures, but knowing about them beforehand allows the system's implementers to take appropriate action to sell the system to end users and convince them of its benefits.

The System Selection and Leader Nomination Stage

In the third global ERP implementation stage, two pivotal events must take place: system selection and the nomination of a leader to oversee the project.

System Selection

At this stage, managers are shown a demonstration of the system under consideration. Differences in perceptions between the developers of the system and the managers of the purchasing organization constitute an important consideration in the strategic process of choosing a system. Leidner and Kayworth (2006) claimed that variations in cultural values may lead to differing perceptions of, and approaches to, the manner in which information systems are developed. Managers should therefore be aware that the system reflects the developers' view of organizational culture, which might deviate considerably from that of their own organization or its strategic needs. These differences were conceptualized by Hazzan and Dubinsky (2005) in their discussion of the connections between a national culture and the culture inspired by software development methods. According to their model of the tightness of software development methods (SDMs) and the tightness of a national culture, the fit between a given SDM and a national culture can predict the degree to which that SDM will be accepted in that culture. Thus, the system developers and the organization's IT people must be involved in the ERP selection process. Since IT developers are in charge of technical assistance during the implementation process, it is important to understand that this assistance is also culturally dependent: Hasan and Ditsa (1999) found that IT staff are able to give advice to IT managers in countries with low power distance.
This finding has implications for the fourth stage of implementation as well, in which end users operating the system require technical assistance in coping with problems that emerge. However, when cultural differences between the developer’s support personnel and end users are significant, they may impede the implementation process.
Project Leaders for Global ERP Implementation

Global ERP implementation means confronting work situations charged with dynamic cultural issues. Elaborating effective solutions for global implementation activities influenced by nuances of culture poses difficult challenges that may confound even the most skilled leader. Brake, Walker, and Walker (1995) portrayed the ideal global implementation leader as a strategic architect-coordinator who is able to recognize opportunities and risks across national and functional boundaries, and who is sensitive and responsive to cultural differences. The implementation project leader should have a "global mindset," since global ERP system implementation requires fundamental changes in managerial practices in domestic as well as international organizations. The GLOBE project offers detailed examples of countries' cultural manifestations and leadership attributions. GLOBE's major empirical contribution to this stage of the implementation process was its identification of universally desirable and culturally contingent attributes of leadership, and its findings showed the relationships among cultural dimensions, organizational practices, and culturally endorsed leadership dimensions.
The Importance of Organizational Culture in the Fourth and Fifth Stages of ERP Implementation

Transfer of technology from the manufacturer to the end user begins when the decision is made to implement the system in the organization's different branches worldwide. The products of these stages should include: the exploitation of the new system; final customization of the system to the needs of end users and to the special needs of specific organizations; a post-project review process; and a cultural change in the organization (if required). The central axis around which these stages revolve is employee management, and it involves classic areas of organizational behavior and human resource management such as training, learning, performance goal setting, and rewards.

In these stages of the process, the relevant cultural dimensions concern differences within the organizational culture. An organizational culture is not uniform; it comprises horizontal as well as vertical sub-cultures. A horizontal sub-culture may be created on the basis of professional practices, for example, sub-cultures of physicians, nurses, and managers (Trice, 1993). Similarly, a vertical sub-culture may emerge from the organizational structure, for example, a sub-culture of the production department, the R&D department, or the HR department (Schein, 1996), or of technology-intensive departments as opposed to low-technology departments (Brainin et al., 2005). These sub-cultures imply that there will be differences in the implementation of the new IT system, because each sub-culture perceives the role and function of the IT system in its work differently. This process of ascribing meaning to technology is labeled the "social construction of technology." Furthermore, different occupational sub-cultures may have entirely different cultural interpretations of proposed technologies, and may experience conflict and resistance in adopting certain technologies (Von Meier, 1999). Researchers have found that clashing values among organizational sub-cultures hinder the information sharing and collaboration needed to integrate technology effectively.

In this stage it is important to appoint champions of technological change from within the organization (Howell & Higgins, 1990) to lead the implementation process effectively. Management style also influences the implementation approach and project duration; it relates to the attitude toward setting priorities for implementing an ERP system (Sheu et al., 2004). It must be stressed that societal culture also influences the organization's activities, and therefore the human resource management techniques described above must be adapted to the local culture. For example, societal culture was found to influence instruction processes (Earley, 1994) and goal setting (Erez, Earley, & Hulin, 1985), among many other areas. Certain organizational culture characteristics facilitate technology implementation: organizations with a high learning orientation are better able to adjust to changes in general (Lipshitz, Popper, & Oz, 1996), and to technological changes in particular (Brainin & Erez, 2002; DeLong & Fahey, 2003), because they have learning channels and can learn from experience, a very important condition for implementing new technology. Global ERP implementation can result in culture transformation over time.
CONCLUSION

A Managerial Agenda for a Positive Cultural Experience during Global ERP Implementation

The preceding discussion set out to explain how the variations in cultural aspects that surface during global ERP implementation may be handled, based on the assumption that global ERP systems are not universally acceptable and effective. The increasing interrelations among countries and the globalization of corporations do not imply that cultural differences are disappearing or diminishing. On the contrary, as economic boundaries are eliminated, cultural barriers may present new challenges and opportunities in business. When different cultures come into contact, they may converge in some respects, but their idiosyncrasies will likely be amplified. As a means of integrating work processes, ERP implementation becomes a method of organizational boundary crossing and requires special handling to overcome the resistance this change induces.

The above discussion provides a detailed examination of the cultural discrepancies that arise at various stages of the implementation process. The cultural boundaries to be crossed as a result of global ERP implementation include, within the organization, sub-cultures stemming from the organizational structure and sub-cultures stemming from different professions and roles; between organizations, they include societal culture stemming from cultural differences between countries. To overcome the problems arising from global ERP implementation, it was suggested that crossing the cultural boundary be treated as a process of cultural exchange rather than as a cultural conflict. To restore fit and to prevent poor global ERP implementation and resistance to change, managers must harness ERP systems to their needs by adapting them to their own set of beliefs, thereby breaking the link between the technology and Western logic. This requires them: to examine simultaneously their own positions and behaviors and those of end users regarding ERP use, and to analyze their reciprocal impact; to help formulate policy regarding the utilization of ERP systems in different organizational settings across countries or regions; to coordinate expectations between themselves and end users in order to narrow the cultural gap; and to raise the consciousness of ERP engineers and designers regarding the impact of differences in the socio-cultural attributes of potential managers and end users, and hence increase the systems' effective fit in different societies.

The following recommendations are proposed. Organizational leaders must be made aware of cultural differences and be prepared to cope with them. Financial resources must be available to allow investment in a complete mapping of the cultural gaps that may manifest themselves in the second implementation stage. Multicultural groups of project managers (glocal teams) and team leaders (see Appendix) can overcome the lack of communication between the host and its subsidiaries that often results in mistrust, project delays, and budget overruns. Moreover, such groups will foster informal communication among representatives of different nationalities, which is critical for the success of global ERP projects, and will limit the formal documentation channels that people tend to use in the absence of such groups. Solutions may include:

1. Multi-national teams for development and implementation as a key to success.
2. Local examination of the technological leap.
3. Flexibility in the implementation stages: even if the strategic stages were carried out in a deterministic fashion (i.e., imposed on the organization), the later stages may allow participative actions and be culturally appropriate.
4. Involving users in the design of global ERP systems: although determining the key actors in user groups is especially challenging in international settings, their involvement may partially assuage subsequent perception conflicts, since the greater the extent to which a user group's values are embedded in a system, the less vision conflict is to be expected.
This chapter has addressed aspects of organizational and societal culture that are of explicit concern for ERP implementation in global organizations, and has recommended managerial mechanisms for overcoming major obstacles in this process.
FUTURE RESEARCH DIRECTIONS

Cultural differences may impede global ERP implementation. Although the GLOBE project, used in this chapter to recognize and discuss differences in societal cultures, provides a profile of cultural dimensions for each society, it does not present a behavioral profile. Future research on differences in societal culture is required to build in-depth understanding of how people actually function and manifest different cultural attributes, to investigate how different cultural dimensions interact, and to determine the relative importance of each dimension for understanding each culture (Javidan et al., 2004b).

The managerial mechanism suggested for formulating policy regarding the utilization of ERP systems in different organizational settings across countries or regions was the establishment of multicultural groups of project managers (glocal teams) and the appointment of team leaders. However, multicultural teams can face high levels of conflict and misunderstanding. Such teams need to develop a shared meaning system through socialization to the team and through contacts and interactions among team members. Their ability to serve as 'mediators' and represent their countries is contingent upon collaboration dynamics that must overcome cultural differences. Research on multicultural teams is scarce (Earley & Gibson, 2002; Earley & Mosakowski, 2000; Erez & Gati, 2004). Shokef and Erez (2006) suggested that a 'glocal' identity represents both a global identity and a strong local identity, and that it seems to enable individuals to shift from one social context to another. Future research is needed to explore the coexistence of multiple identities and its contribution to glocal team work.

Another research challenge relates to team leaders for multicultural teams: what are adequate selection and training processes? Current theories in management and psychology do not provide sufficient frameworks to explain the successes or failures of people working and managing in foreign cultures. It has been suggested that the measure of one's 'cultural intelligence' (CQ) be used as a predictor of an outsider's natural ability to interpret and respond to unfamiliar signals in an appropriate manner (Earley, Ang, & Joo-Seng, 2006). Accordingly, a manager with high CQ can enter new cultural settings (national, professional, organizational, or regional) and immediately understand what is happening and why, confidently interacting with people and engaging in the right actions. However, very little empirical research explores this issue. The future research options presented in this chapter should stimulate and challenge researchers to explore new research areas that will contribute to work experiences in the global economy.
REFERENCES

Adams, S., Sarkis, J., & Liles, D. H. (1995). The development of strategic performance metrics. Engineering Management Journal, 1, 24–32.

Al-Mashari, M. (2003). A process change-oriented model for ERP application. International Journal of Human-Computer Interaction, 16(1), 39–55. doi:10.1207/S15327590IJHC1601_4

Al-Mudimigh, A., Zairi, M., & Al-Mashari, M. (2001). ERP software implementation: An integrative framework. European Journal of Information Systems, 10, 216–226. doi:10.1057/palgrave.ejis.3000406

Ashkanasy, N., Gupta, V., Mayfield, M. S., & Trevor-Roberts, E. (2004). Future orientation. In R. J. House, P. J. Hanges, M. Javidan, P. W. Dorfman, & V. Gupta (Eds.), Culture, leadership, and organizations (pp. 282-342). London: Sage.

Baroudi, J. J., Olson, M. H., & Ives, B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29(3), 232–238. doi:10.1145/5666.5669

Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (1987). The social construction of technological systems: New directions in the sociology and history of technology. Cambridge, MA: MIT Press.
Boersma, K., & Kingma, S. (2005). Developing a cultural perspective on ERP. Business Process Management Journal, 11(2), 123–136. doi:10.1108/14637150510591138

Brainin, E., & Erez, M. (2002, April). Technology and culture: Organizational learning orientation in the assimilation of new technology in organizations. In Proceedings of the 3rd European Conference on Organizational Learning (OKLC), Athens, Greece.

Brainin, E., Gilon, G., Meidan, N., & Mushkat, Y. (2005). The impact of intranet integrated patient medical file (IIPMF) assimilation on the quality of medical care and organizational advancements (Report #2001/49). Submitted to the Israel National Institute for Health Policy and Health Services.

Brake, T., Walker, M. D., & Walker, T. (1995). Doing business internationally. New York: McGraw-Hill.

Coombs, R., Knights, D., & Willmott, H. C. (1992). Culture, control, and competition: Towards a conceptual framework for the study of information technology in organizations. Organization Studies, 13(1), 51–72. doi:10.1177/017084069201300106

Davenport, T. H. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, 76, 121–131.

Davison, R. (2002). Cultural complications of ERP: Valuable lessons learned from implementation experience in parts of the world with different cultural heritage. Communications of the ACM, 45(7), 109–111. doi:10.1145/514236.514267

DeLong, D. W., & Fahey, L. (2003). Diagnosing cultural barriers to knowledge management. The Academy of Management Executive, 14(4), 113–127.
Doherty, N. F., & Doig, G. (2003). An analysis of the anticipated cultural impacts of the implementation of data warehouses. IEEE Transactions on Engineering Management, 50(1), 78–88. doi:10.1109/TEM.2002.808302

Donaldson, L. (1987). Strategy and structural adjustment to regain fit and performance: In defense of contingency theory. Journal of Management Studies, 24, 1–24. doi:10.1111/j.1467-6486.1987.tb00444.x

Dube, L., & Robey, D. (1999). Software stories: Three cultural perspectives on the organizational context of software development practices. Accounting, Management and Information Technologies, 9(4), 223–259. doi:10.1016/S0959-8022(99)00010-7

Earley, C. P., Ang, S., & Joo-Seng, T. (2006). CQ: Developing cultural intelligence at work. New York: Oxford University Press.

Earley, C. P., & Gibson, C. B. (2002). Multinational work teams: A new perspective. Mahwah, NJ: Lawrence Erlbaum.

Earley, C. P., & Mosakowski, E. (2000). Creating hybrid team cultures: An empirical test of transnational team functioning. Academy of Management Journal, 43(1), 26–49. doi:10.2307/1556384

Earley, P. C. (1994). Self or group? Cultural effects of training on self-efficacy and performance. Administrative Science Quarterly, 39, 89–117. doi:10.2307/2393495

Erez, M., & Earley, C. P. (1993). Culture, self-identity and work. New York: Oxford University Press.

Erez, M., Earley, P. C., & Hulin, C. L. (1985). The impact of participation on goal acceptance and performance: A two-step model. Academy of Management Journal, 28, 50–66. doi:10.2307/256061

Erez, M., & Gati, E. (2004). A dynamic, multi-level model of culture: From the micro-level of the individual to the macro-level of a global culture. Applied Psychology: An International Review, 53(4), 583–598.

Ford, D. P., Connelly, C. E., & Meister, D. B. (2003). Information systems research and Hofstede's Culture's Consequences: An uneasy and incomplete partnership. IEEE Transactions on Engineering Management, 50(1), 8–25. doi:10.1109/TEM.2002.808265

Frost, J. F., Moore, L. F., Louis, M. R., Lundberg, C. C., & Martin, J. (Eds.). (1985). Organizational culture. Thousand Oaks, CA: Sage.

Gartner Group. (2006). The Gartner scenario 2006: The current state and future direction of IT. Retrieved from http://www.gartner.com

Grant, D., Hall, R., Wailes, N., & Wright, C. (2006). The false promise of technological determinism: The case of enterprise resource planning systems. New Technology, Work and Employment, 21(1), 2–15. doi:10.1111/j.1468-005X.2006.00159.x

Green, S. G., Gavin, M. B., & Aiman-Smith, L. (1995). Assessing a multidimensional measure of radical technological innovation. IEEE Transactions on Engineering Management, 42(3), 203–214. doi:10.1109/17.403738

Haley, G., & Tan, C. T. (1999). East vs. West: Strategic marketing management meets the Asian networks. Journal of Business and Industrial Marketing, 14(2), 91–101. doi:10.1108/08858629910258973

Hall, E. T., & Hall, M. R. (1990). Understanding cultural differences. Yarmouth, ME: Intercultural Press.

Hammer, M., & Champy, J. (1993). Reengineering the corporation. New York: HarperCollins.
Hanges, P. J., & Dickson, M. W. (2004). The development and validation of the GLOBE culture and leadership scales. In R. J. House, P. J. Hanges, M. Javidan, P. W. Dorfman, & V. Gupta (Eds.), Culture, leadership, and organizations (pp. 122-151). London: Sage.

Hasan, H., & Ditsa, G. (1999). The impact of culture on the adoption of IT: An interpretive study. Journal of Global Information Management, 7(1), 5–15.

Hazzan, O., & Dubinsky, Y. (2005). Clashes between culture and software development methods: The case of the Israeli hi-tech industry and extreme programming. In Proceedings of the Agile 2005 Conference (pp. 59-69), Denver, CO.

Hofmann, D. A., & Stetzer, A. (1996). A cross-level investigation of factors influencing unsafe behaviors and accidents. Personnel Psychology, 49, 307–339. doi:10.1111/j.1744-6570.1996.tb01802.x

Hofstede, G. (1980). Culture's consequences. London: Sage.

Hofstede, G. (2001). Culture's consequences: Comparing values, behaviors, institutions and organizations across nations. Thousand Oaks, CA: Sage.

House, R. J., Hanges, P. J., Javidan, M., Dorfman, P. W., & Gupta, V. (2004). Culture, leadership, and organizations. London: Sage.

Howell, J. M., & Higgins, C. (1990). Champions of technological innovation. Administrative Science Quarterly, 35, 317–341. doi:10.2307/2393393

Huang, J. C., Newell, S., Galliers, R. D., & Pan, S. (2003). Dangerous liaisons? Component-based development and organizational subcultures. IEEE Transactions on Engineering Management, 50(1), 89–99. doi:10.1109/TEM.2002.808297
Hussain, S. (1998). Technology transfer model across culture: Brunei-Japan joint ventures. International Journal of Social Economics, 25(6-8), 1189–1198. doi:10.1108/03068299810212676

Javidan, M., House, R. J., & Dorfman, P. W. (2004a). A nontechnical summary of GLOBE findings. In R. J. House, P. J. Hanges, M. Javidan, P. W. Dorfman, & V. Gupta (Eds.), Culture, leadership, and organizations (pp. 29-48). London: Sage.

Javidan, M., House, R. J., Dorfman, P. W., Gupta, V., Hanges, P. J., & Sully de Luque, M. (2004b). Conclusions and future directions. In R. J. House, P. J. Hanges, M. Javidan, P. W. Dorfman, & V. Gupta (Eds.), Culture, leadership, and organizations (pp. 723-732). London: Sage.

Jeannet, J. P. (2000). Managing with a global mindset. London: Prentice Hall.

Kaarst-Brown, M. L. (2004). How organizations keep information technology out: The interaction of tri-level influence on organizational and IT culture (Working Paper IST-MLKB: 2004-2). School of Information Studies, Syracuse University, USA.

Kettinger, W. J., Lee, C. C., & Lee, S. (1995). Global measures of information service quality: A cross-national study. Decision Sciences, 26(5), 569–588. doi:10.1111/j.1540-5915.1995.tb01441.x

Klein, K. J., Conn, A. B., & Speer-Sorra, J. S. (2001). Implementing computerized technology: An organizational analysis. The Journal of Applied Psychology, 86(5), 811–824. doi:10.1037/0021-9010.86.5.811

Klein, K. J., & Speer-Sorra, J. (1996). The challenge of innovation implementation. Academy of Management Review, 21(4), 1055–1080. doi:10.2307/259164

Kluckhohn, F. R., & Strodtbeck, F. L. (1961). Variations in value orientations. New York: HarperCollins.
Krumbholz, M., & Maiden, N. (2001). The implementation of enterprise resource planning packages in different organizational and national cultures. Information Systems, 26, 185–204. doi:10.1016/S0306-4379(01)00016-3

Kumar, K., & Van Hillegersberg, J. (2000). ERP experiences and evolution. Communications of the ACM, 43(3), 22–26. doi:10.1145/332051.332063

Kunda, G. (1992). Engineering culture: Control and commitment in a high-tech corporation. Philadelphia: Temple University Press.

Lane, P. J., Koka, B. R., & Pathak, S. (2006). The reification of absorptive capacity: A critical review and rejuvenation of the construct. Academy of Management Review, 31(4), 833–863.

Lee, A. S. (1991). Integrating positivist and interpretive approaches to organizational research. Organization Science, 2(4), 342–365. doi:10.1287/orsc.2.4.342

Leidner, D. E., & Kayworth, T. (2006). A review of culture in information systems research: Toward a theory of information technology culture conflict. MIS Quarterly, 30(2), 357–399.

Leonard-Barton, D., & Sinha, D. K. (1993). Developer-user interaction and user satisfaction in internal technology transfer. Academy of Management Journal, 36(5), 1125–1139. doi:10.2307/256649

Light, B., & Wagner, E. (2006). Integration in ERP environments: Rhetoric, realities and organizational possibilities. New Technology, Work and Employment, 21(3), 215–228. doi:10.1111/j.1468-005X.2006.00176.x

Lipshitz, R., Popper, M., & Oz, S. (1996). Building learning organizations: The design and implementation of organizational learning mechanisms. The Journal of Applied Behavioral Science, 32, 292–305. doi:10.1177/0021886396323004

Markus, M. L., & Tanis, C. (2000). The enterprise system experience: From adoption to success. In R. W. Zmud (Ed.), Framing the domains of IT management (pp. 173-207). Cincinnati, OH: Pinnaflex Educational Resources.

Martin, J. (2002). Organizational culture: Mapping the terrain. London: Sage.

McClelland, D. C. (1961). The achieving society. Princeton, NJ: Van Nostrand.

Milberg, S. J., Burke, S. J., Smith, J. H., & Kallman, E. A. (1995). Values, personal information privacy, and regulatory approaches. Communications of the ACM, 38(12), 65–73. doi:10.1145/219663.219683

Myers, M. D., & Tan, F. B. (2002). Beyond models of national culture in information systems research. Journal of Global Information Management, 10(1), 24–33.

Nah, F. F., Zuckweiler, K. M., & Lau, J. L. (2003). ERP implementation: Chief information officers' perceptions of critical success factors. International Journal of Human-Computer Interaction, 16(1), 5–22. doi:10.1207/S15327590IJHC1601_2

Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404–428. doi:10.1287/orsc.11.4.404.14600

Parr, A., & Shanks, G. (2000). A model of ERP project implementation. Journal of Information Technology, 15(4), 289–301. doi:10.1080/02683960010009051

Pettigrew, A. M. (1990). Organizational climate and culture: Two constructs in search of a role. In B. Schneider (Ed.), Organizational climate and culture (pp. 413-433). San Francisco: Jossey-Bass.
Png, I. P., Tan, B. C. Y., & Wee, K. L. (2001). Dimensions of national cultures and corporate adoption of IT infrastructure. IEEE Transactions on Engineering Management, 48(1), 36–45. doi:10.1109/17.913164

Putnam, R. D. (1993). Making democracy work. Princeton, NJ: Princeton University Press.

Robey, D., & Markus, M. L. (1984). Rituals in information system design. MIS Quarterly, 8(1), 5–15. doi:10.2307/249240

Robey, D., Ross, J. W., & Boudreau, M. (2000). Learning to implement enterprise systems: An exploratory study of the dialectics of change. Journal of Management Information Systems, 19(1), 17–46.

Rousseau, D. M. (1990). Assessing organizational culture: The case for multiple methods. In B. Schneider (Ed.), Organizational climate and culture (pp. 153-192). San Francisco: Jossey-Bass.

Rousseau, D. M., & Cooke, A. R. (1988). Culture of high reliability: Behavioral norms aboard a U.S. aircraft carrier. In Proceedings of the Academy of Management Meeting, Anaheim, CA.

Sackmann, S. (1992). Culture and subcultures: An analysis of organizational knowledge. Administrative Science Quarterly, 37, 140–161. doi:10.2307/2393536

Sarkis, J., & Sundarraj, R. P. (2003). Managing large-scale global enterprise resource planning systems: A case study at Texas Instruments. International Journal of Information Management, 23, 431–442. doi:10.1016/S0268-4012(03)00070-7

Schein, E. H. (1992). Organizational culture and leadership. San Francisco: Jossey-Bass.

Schein, E. H. (1996). Culture: The missing concept in organization studies. Administrative Science Quarterly, 41, 229–240. doi:10.2307/2393715

Scholz, C. (1990). The symbolic value of computerized information systems. In P. Gagliardi (Ed.), Symbols and artifacts: Views of the corporate landscape (pp. 233-254). New York: Aldine de Gruyter.

Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. In M. P. Zanna (Ed.), Advances in experimental social psychology (vol. 25, pp. 1-65). San Diego: Academic Press.
Sheu, C., Chae, B., & Yang, C. (2004). National differences and ERP implementation: Issues and challenges. Omega, 32(5), 361–372. doi:10.1016/j.omega.2004.02.001

Shokef, E., & Erez, M. (2006). Shared meaning systems in multicultural teams. In E. Mannix, M. Neale, & Y.-R. Chen (Eds.), National culture and groups: Research on managing groups and teams (vol. 9, pp. 325-352). San Diego: Elsevier JAI Press.

Souder, W. E., & Nashar, A. S. (1995). A technology selection model for new products development and technology transfer. In L. R. Gomez-Mejia & M. W. Lawless (Eds.), Advances in high-technology management (vol. 5, pp. 225-244). Greenwich, CT: JAI Press.

Srite, M. (2000). The influence of national culture on the acceptance and use of information technologies: An empirical study. Unpublished doctoral dissertation, Florida State University, USA.

Straub, D., Loch, K., Evaristo, R., Karahanna, E., & Srite, M. (2002). Toward a theory-based measurement of culture. Journal of Global Information Management, 10(1), 13–23.

Thatcher, J. B., Srite, M., Stepina, L. P., & Liu, Y. (2003). Culture, overload and personal innovativeness with information technology: Extending the nomological net. Journal of Computer Information Systems, 44(1), 74–81.
Triandis, H. C. (1995). Individualism and collectivism. Boulder, CO: Westview.

Trice, H. M. (1993). Occupational subcultures in the workplace. Ithaca, NY: ILR Press.

Trompenaars, F. (1996). Resolving international conflict: Culture and business strategy. Business Strategy Review, 7(3), 51–68. doi:10.1111/j.1467-8616.1996.tb00132.x

Van Everdingen, Y. M., & Waarts, E. (2003). The effect of national culture on the adoption of innovations. Marketing Letters, 14(3), 217–232. doi:10.1023/A:1027452919403

Von Meier, A. (1999). Occupational cultures as a challenge to technological innovation. IEEE Transactions on Engineering Management, 46(1), 101–114. doi:10.1109/17.740041
ADDITIONAL READING

Avgerou, C., & Madon, S. (2004). Framing IS studies: Understanding the social context of IS innovation. In C. Avgerou, C. Ciborra, & F. Land (Eds.), The social study of information and communication technology (pp. 162-182). New York: Oxford University Press.

Cao, J., Crews, J. M., Lin, M., Deokar, A., Burgoon, J. K., & Nunamaker, J. F., Jr. (2007). Interaction between system evaluation and theory testing: A demonstration of the power of a multifaceted approach to information systems research. Journal of Management Information Systems, 22(4), 207–235. doi:10.2753/MIS0742-1222220408

Drori, G. S. (2006). Global e-litism: Digital technology, social inequality, and transnationality. New York: Worth.
Erumban, A. A., & Jong, S. B. (2006). Cross-country differences in ICT adoption: A consequence of culture? Journal of World Business, 41, 302–314. doi:10.1016/j.jwb.2006.08.005

Gallivan, M., & Srite, M. (2005). Information technology and culture: Identifying fragmentary and holistic perspectives of culture. Information and Organization, 15, 295–338. doi:10.1016/j.infoandorg.2005.02.005

Gelfand, M. J., Nishii, L. H., & Raver, J. L. (2006). On the nature and importance of cultural tightness-looseness. The Journal of Applied Psychology, 91(6), 1225–1245. doi:10.1037/0021-9010.91.6.1225

Gibson, C. B. (1997). Do you hear what I hear? A framework for reconciling intercultural communication difficulties arising from cognitive styles and cultural values. In C. P. Earley & M. Erez (Eds.), New perspectives on international industrial/organizational psychology (pp. 335-362). San Francisco: New Lexington Press.

Gregor, S. (2006). The nature of theory in information systems. MIS Quarterly, 30(3), 611–642.

Howcroft, D., & Light, B. (2006). Reflections on issues of power in packaged software selection. Information Systems Journal, 16(3), 215–235. doi:10.1111/j.1365-2575.2006.00216.x

Iivari, J., & Huisman, M. (2007). The relationship between organizational culture and the deployment of systems development methodologies. MIS Quarterly, 31(1), 35–58.

Iivari, N. (2006). 'Representing the user' in software development: A cultural analysis of usability work in the product development context. Interacting with Computers, 18, 635–664. doi:10.1016/j.intcom.2005.10.002

Inglehart, R. (2000). Globalization and postmodern values. The Washington Quarterly, 23(1), 215–228. doi:10.1162/016366000560665
Kallinikos, J. (2004). Farewell to constructivism: Technology and context-embedded action. In C. Avgerou, C. Ciborra, & F. Land (Eds.), The social study of information and communication technology (pp. 140-161). New York: Oxford University Press.

Karahanna, E., Evaristo, R. J., & Srite, M. (2005). Levels of culture and individual behavior: An integrative perspective. Journal of Global Information Management, 13(2), 1–20.

Ko, D., Kirsch, L. J., & King, W. R. (2005). Antecedents of knowledge transfer from consultants to clients in enterprise system implementations. MIS Quarterly, 29(1), 59–85.

Kraemmergaard, P., & Rose, J. (2002). Managerial competences for ERP journeys. Information Systems Frontiers, 4(2), 199–211. doi:10.1023/A:1016054904008

Lippert, S. K., & Volkmar, J. A. (2007). Cultural effects on technology performance and utilization: A comparison of U.S. and Canadian users. Journal of Global Information Management, 15(2), 56–90.

Martin, J. (1992). Cultures in organizations. New York: Oxford University Press.

Sassen, S. (2004). Towards a sociology of information technology. In C. Avgerou, C. Ciborra, & F. Land (Eds.), The social study of information and communication technology (pp. 77-99). New York: Oxford University Press.

Sawyer, S. (2001). A market-based perspective on information system development. Communications of the ACM, 44(11), 97–102. doi:10.1145/384150.384168
1316
Shepherd, C. (2006). Constructing enterprise resource planning: A thoroughgoing interpretivist perspective on technological change. Journal of Occupational and Organizational Psychology, 79, 357–376. doi:10.1348/096317906X105742 Srite, M., & Karahanna, E. (2006). The role of espoused national cultural values in technology acceptance. MIS Quarterly, 30(3), 679–704. Tan, B. C. Y., Smith, J. H., Keil, M., & Montelealegre, R. (2003). Reporting bad news about software projects: Impact of organizational climate and information asymmetry in an individualistic and a collectivistic culture. IEEE Transactions on Engineering Management, 50(1), 64–77. doi:10.1109/TEM.2002.808292 Tolbert, A. S., McLean, G. N., & Myers, R. C. (2002). Creating the global learning organization (GLO). International Journal of Intercultural Relations, 26, 463–472. doi:10.1016/S01471767(02)00016-0 Walsham, G. (2002). Cross-cultural software production and use: A structurational analysis. MIS Quarterly, 26(4), 359–382. doi:10.2307/4132313 Weick, K. E. (2007). The generative properties of richness. Academy of Management Journal, 50(1), 14–19. Weisinger, J. Y., & Trauth, E. M. (2003). The importance of situating culture in cross-cultural IT management. IEEE Transactions on Engineering Management, 50(1), 26–30. doi:10.1109/ TEM.2002.808259
Experiences of Cultures in Global ERP Implementation
APPENDIX

Table 1.

Stage No. | Stage Name | Experiences of Culture | Result | Solution
1 | Initiation | Differences in societal cultures between initiating and implementing countries | Cultural gaps and a high level of technological leap | Glocal teams with representatives from all the countries
2 | Decision: 1. Strategic thinking; 2. Resource allocation | 1. Differences in perceptions of strategic goals; 2. Differences in decision making | Need for promoting and persuading of the strategic goals and convincing the end users | Glocal teams with representatives from all the countries; local teams
3 | System Selection | Language gaps, differences in developers' and owners' perceptions regarding critical performance factors | Misfit of the system to the societal culture | Glocal teams including developers and end user representatives of the countries
  | Leader Nomination | Unique characteristics of a global approach – global mindset | Insensitive and unresponsive to cultural differences | Global selection processes
4 | Basic Operation: 1. Exploiting the system; 2. Additional technical matching | Differences in sub-cultures | Inadequate training and goal setting processes | Local teams and representatives of the glocal team
5 | Routine Operation and system exploitation: 1. Post-project analysis; 2. Cultural change | Differences in management style and levels of learning orientation culture | Lack of synergy | Local teams and representatives on the glocal team
This work was previously published in Enterprise Resource Planning for Global Economies: Managerial Issues and Challenges, edited by Carlos Ferran and Ricardo Salim, pp. 167-188, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 5.9
Enterprise Resource Planning Systems:
Effects and Strategic Perspectives in Organizations

Alok Mishra
Atilim University, Turkey

DOI: 10.4018/978-1-59904-859-8.ch005
ABSTRACT

In the age of globalization, organizations all over the world are giving more significance to strategy and planning to gain an edge over the competition. This chapter discusses the effects of Enterprise Resource Planning (ERP) systems and strategic perspectives in organizations. It examines how information technology and ERP together help align the business in a way that leads to excellent productivity. It further explores the ways in which the effects of an ERP system can provide organizations with sustained competitive advantage.

INTRODUCTION

Enterprise Resource Planning (ERP) software is one of the fastest growing segments of business computing today (Luo and Strong, 2004), and ERPs are one of the most significant business software investments being made in this new era (Beard and Sumner, 2004). Davenport (1998) declared that 'the business world's embrace of enterprise systems may in fact be the most important development in the corporate use of information technology in the 1990s'. Mabert et al. (2001) noted that industry reports suggest as many as 30,000 companies worldwide have implemented ERP systems. According to a report by Advanced Manufacturing Research, the
ERP software market is expected to grow from $21 billion in 2002 to $31 billion in 2006, and the entire enterprise applications market, which includes Customer Relationship Management and Supply Chain Management software, will top $70 billion (AMR Research, 2002). Further, AMR Research has projected as much as $10 billion in global investments in ERP (as cited in Kalling, 2003). The ERP market is projected to grow from a current $15 billion to $50 billion in the next five years and to reach $1 trillion by 2010 (Bingi et al., 1999). ERP systems offer the advantage of providing organizations with a single, integrated software system linking core business activities such as operations, manufacturing, sales, accounting, human resources, and inventory control (Lee and Lee, 2000; Newell et al., 2003; Shanks and Seddon, 2000). According to Brown and Vessey (2003), this integrated perspective may be the first true organization-wide view available to management. According to Lee and Myers (2004), much of the literature on ERP implementation suggests that ERP systems should support the strategic objectives of the organization. They observed that some ERP vendors tend to assume that implementing their products is a straightforward translation from strategy to IT-enabled business processes. ERP helps organizations meet the challenges of globalization with a comprehensive, integrated application suite that includes next-generation analytics, human capital management, financials, operations, and corporate services. With support for industry-specific best practices, ERP helps organizations improve productivity, sense and respond to market changes, and implement new business strategies to develop and maintain a competitive edge. ERP is designed to help businesses succeed in the global marketplace by supporting international legal and financial compliance issues and enabling organizations to adapt internal operations and business processes to meet country-specific needs. As a result, organizations can focus on improving productivity and serving their customers instead of struggling to ensure they
are in compliance with business and legal requirements around the world. Companies that automate and streamline workflows across multiple sites (including suppliers, partners, and manufacturing sites) produced 66% more improvement in reducing total time from order to delivery, according to Aberdeen's 2007 study of the role of ERP in globalization. The majority of companies studied (79%) view global markets as a growth opportunity, but of those companies, half are also feeling pressures to reduce costs (Jutras, 2007). Those companies that coordinate and collaborate between multiple sites, operating as a vertically integrated organization, have achieved more than a 10% gain in global market share (Jutras, 2007; Marketwire, 2007).
INFORMATION TECHNOLOGY AND STRATEGIES

In spite of the extensive literature and guidance available, less than 10% of effectively formulated strategies are effectively executed (Kaplan and Norton, 1996). Mintzberg (1992) defined strategy as 'a plan – some sort of consciously intended course of action, a guideline (or set of guidelines) to deal with a situation'. He further defined strategy from the perspectives of a plan, a ploy, a position, a pattern, and a perspective (Ikavalko and Aaltonen, 2001). Michael Porter's (1996) definition of strategy focuses more on the outcome: 'the creation of a unique and valuable position, involving a different set of activities'. He believes that a strategy is the way an organization seeks to achieve its vision and mission, and that a successful strategy allows a company to capture and sustain a competitive advantage. Porter (1985) identified five key forces that impact an organization's competitive position:
• The bargaining power of suppliers;
• The bargaining power of buyers;
• The threat of new entrants;
• The threat of substitute products; and
• Rivalry among existing organizations.
He believed that the impact of these forces could be influenced by strategies focused on being a low-cost provider and on product differentiation. These strategies determine how various discrete activities are performed to add value and, eventually, a competitive advantage. An organization's activities can be categorised into primary activities, which directly add value, and supporting activities. These interrelated activities make up Porter's value chain; one of the supporting activities he identified was technology development. Porter and Millar (1985) proposed an information intensity matrix to assist in identifying where information technology could be used strategically in the value chain. Later, Somogyi and Galliers (1987) supported this by identifying how information technology could help organizations achieve competitive advantage at various points across the value chain. During the last two decades, organizations have identified the significance of information technology in the achievement of strategic objectives. Scott Morton (1991) identified five interrelated factors (structure, strategy, technology, management processes, and individuals and roles) that influence the attainment of strategic objectives; one of these factors was information technology. A recent survey of more than 300 CEOs and CIOs identified the alignment of IT and business strategy as their number one priority (Beal, 2003). Tallon and Kraemer (2003), in a survey of 63 companies, found there was significant value gained from the alignment of these strategies. Broadbent and Weill (1993) define this IT-business alignment as the extent to which business strategies are enabled, supported, and stimulated by information strategies. Teo and King (1997) proposed four different scenarios, or degrees of integration, between business and IT strategies:
• Administrative integration: there is little relationship between the business and IT strategies.
• Sequential integration: the business strategy is developed first, in isolation from the IT strategy; the IT strategy is then developed to support the business strategy.
• Reciprocal integration: a reciprocal and interdependent relationship exists between the two strategies; the IT strategy is used to support and influence the business strategy.
• Full integration: both strategies are developed concurrently, in an integrated manner.
This increased need for closer alignment has resulted in companies focussing on Strategic Information Systems Planning (SISP) and the development of methodologies to support it (Pant and Hsu, 1995; Hackney et al., 2000). Hackney et al. (2000) identified the assumptions which underlie SISP and discussed their validity. According to them, the main assumptions are:

• business strategies must exist as a precursor to SISP;
• business strategies are different from IT strategies; and
• IT and business strategies can be aligned.
They further argue that as business strategies evolve, it is often difficult for IT strategies to respond.
ERP AND CORPORATE STRATEGIES

ERP systems are widely adopted in a diverse range of organizations and define the business model on which they operate. Many organizations were relieved to find that an ERP system could help them define a business strategy and provide the IT infrastructure to support it (Davenport, 2000). Hackney
et al. (2000) believe that ERP systems can provide a 'dynamic stability' to the alignment of business and IT strategies: these systems provide a stable, predictable environment whose usage can evolve in accordance with a company's business strategy. Brancheau et al. (1996) observed that the alignment of an information system (IS) with the strategic goals and operational objectives of an organization was an important issue throughout the 1980s and 1990s. ERP research has explored how these types of systems contribute value to an organization (Markus and Tanis, 1999; Ross and Vitale, 2000; Somers and Nelson, 2001), as well as how they should be integrated with already-existing IT resources (Hayman, 2000). Large and small organizations continue to invest between $300,000 and hundreds of millions of dollars in ERP software and accompanying hardware (Markus, 1999). Different business justifications are offered, including improved productivity, reduced costs, greater operational efficiency, enhanced customer relationship management, and better supply chain management (Brown and Vessey, 2003; Mabert et al., 2001). Finally, for the return on investment in ERP systems to be achieved, these systems should yield a strategic advantage. Kalling (2003) questions whether ERP systems provide a competitive advantage. Current perceptions of ERP systems, as evidenced in trade publications and the academic literature, emphasize their role in enhancing economic efficiency and improving financial performance (Dillard and Yuthas, 2006). Beard and Sumner (2004) observed that, regarding competitive advantage, one challenge is that ERP systems impose a 'common systems' approach by establishing a common set of applications supporting business operations. Successful implementation of an ERP system requires re-engineering business processes to better align with the ERP software, so that the common systems approach is imposed (Brown and Vessey, 2003; Dahlen and Elfsson, 1999). This 'common' structure allows for faster implementation of the ERP system, because there are fewer
customized pieces of software, and the limited customization means that it will be simpler to upgrade the ERP software as new versions and features emerge over time (Beard and Sumner, 2004). Another challenge related to accomplishing a competitive advantage through ERP is the significant complexity of the implementation and integration process, as it often takes several years to fully implement an ERP system. This includes integrating ERP with already-existing IS and accomplishing the related reengineering of the organization (Beard and Sumner, 2004). It is a time-consuming process to refine the alignment of the organization to the ERP system and to more fully leverage the opportunities the ERP system offers. As ERP provides both tangible and intangible benefits, many organizations consider ERP systems essential information system infrastructure for being competitive in today's business world and a foundation for future growth. A recent survey of 800 top U.S. companies showed that ERP systems accounted for 43% of these companies' budgets (Somers & Nelson, 2001). The market penetration of ERP systems varies considerably from industry to industry: according to a Computer Economics Inc. report, 76% of manufacturers, 35% of insurance and health companies, and 24% of Federal Government agencies already have an ERP system or are in the process of installing one (Stedman, 1999). The global market for ERP software, which was $16.6 billion in 1998, is expected to have had $300 billion spent on it over the last decade (Carlino, 1999). There are a number of vendors of ERP systems in the market, although the major one is SAP, with approximately 56% of the market; in Australia, 9 of the top 12 IT users were SAP customers, and 45% of the total list were also SAP users (BRW, 2002). Growth in ERP systems is due to several factors: the need to streamline and improve business processes, better management of information system expenditure, competitive pressures to become a low-cost producer, increased responsiveness to customers and
their needs, the integration of business processes, the provision of a common platform and better data visibility, and use as a strategic tool in the move towards e-commerce (Davenport et al., 2003; Markus et al., 2001; Somers et al., 2001).
ERP SYSTEM AND STRATEGIC ADVANTAGE

Davenport (2000) believes that ERP systems by their very nature impact an organization's strategy, organization, and culture. Mabert et al. (2001) report that companies view the standardization and integration of business processes across the enterprise as a key benefit. As one executive put it, 'ERP is the digital nerve system that connects the processes across the organization and transmits the impact of an event happening in one part to the rest accurately' (Mabert et al., 2001). Regarding tangible benefits, the companies often reported lower inventories, shorter delivery cycles, and shorter financial closing cycles, although the ERP system did not lead to reductions in work force or savings in operational costs in the short term (Beard and Sumner, 2004). According to Davenport (2000), the general benefits of ERP systems are cycle time reduction (e.g. cost and time reductions in key business processes), faster information transactions (e.g. faster credit checks), better financial management (e.g. a shorter financial closing cycle and improved management reporting), laying the groundwork for electronic commerce (e.g. providing the back office functions for Web-based product ordering, tracking, and delivery processes), and a better understanding of key business processes and decision rules. Laughlin (1999) suggests that the business justification for ERP includes both hard dollar savings (e.g. reductions in procurement cost, inventory, and transportation, and increased manufacturing throughput and productivity) and soft dollar savings (e.g. revenue growth, margin enhancement, and sales improvements).
Interestingly, both Mabert et al. (2001) and Laughlin (1999) observed that there is no evidence of ERP providing headcount reduction. When an ERP system works well, it can 'speed up business processes, reduce costs, increase selling opportunities, improve quality and customer satisfaction, and measure results continuously' (Piturro, 1999). Regarding the business benefits of ERP, managers mention many productivity enhancements, including the ability to calculate new prices instantly, more accurate cost comparisons among different facilities, better electronic data interchange with vendors and suppliers, improved forecasting, and the elimination of bottlenecks and duplicative procedures (Plotkin, 1999). Other benefits concern eliminating the redundancies associated with legacy systems (Beard and Sumner, 2004): at Owens Corning, for example, there were 200 legacy systems, most running in isolation from one another, and Eastman Kodak had 2,600 different software applications (Palaniswamy and Frank, 2000). Organizations with ERP systems have made improvements in cross-functional coordination and business performance (Oliver, 1999). According to Markus et al. (2000), the larger value of ERP is measured when the organization captures actual business results (for example, reduced inventory costs), but these results do not occur until the phase in which the systems have already been successfully implemented and integrated into business operations – referred to by Markus et al. (2000) as the 'onward and upward' phase, a stage-three evolution. Holland and Light (2001) also support the view that the business benefits of ERP occur in a 'third stage' of evolution, during which innovative business processes are thoroughly implemented. Beard and Sumner (2004) observed that ERP may not necessarily directly provide organizations with a competitive advantage by reducing their costs below, or increasing their revenues above, what would have been the case if these systems had not been implemented. They further argued that the advantages mentioned are
largely value-added measures, such as increased information, faster processing, more timely and accurate transactions, and better decision-making.
CONCLUSION AND FUTURE TRENDS

ERP systems are an increasingly popular IT platform installed to assist organizations in better capturing, managing, and distributing organization-wide operational data to decision makers throughout the organization (Beard and Sumner, 2004).

Figure 1. Benefits of ERP

Beard and Sumner (2004) further note that sustaining competitive advantage with an ERP system, identifying the characteristics that create sustained competitive advantage, and reengineering the organization without disrupting or destroying the characteristics that gave it its competitive advantage are areas for further research into the effects and strategic advantages of ERP. Such research may adopt empirical approaches, qualitative approaches, or both within organizations. A comparison of ERP implementation effects and strategic advantages in two organizations may also be an interesting area. A limitation and concern, as observed by Lee and Myers (2004), is that because ERP systems are so large and complex that they take years to implement, the inclusion of today's strategic choices in the enterprise systems may significantly constrain future action. By the time the implementation of an ERP system is finished, the strategic context of the firm may have changed.
REFERENCES

AMR Research. (2002). AMR Research predicts enterprise applications market will reach $70 billion in 2006. Available at www.amrresearch.com

Beal, B. (2003). The priority that persists. Retrieved November 2003, from SearchCIO.com: http://searchcio.techtarget.com/originalContent/0,29142,sid19_gci932246;00.html

Beard, J. W., & Sumner, M. (2004). Seeking strategic advantage in the post-net era: Viewing ERP systems from the resource-based perspective. The Journal of Strategic Information Systems, 13, 129–150. doi:10.1016/j.jsis.2004.02.003

Bingi, P., Sharma, M., & Godla, J. (1999). Critical issues affecting ERP implementation. Information Systems Management, 16(3), 7–14. doi:10.1201/1078/43197.16.3.19990601/31310.2
Brancheau, J. C., Janz, B. D., & Wetherbe, J. C. (1996). Key issues in information systems management: 1994–95 SIM Delphi results. MIS Quarterly, 20(2), 225–242. doi:10.2307/249479

Broadbent, M., & Weill, P. (1993). Improving business and information strategy alignment: Learning from the banking industry. IBM Systems Journal, 32(1), 162–179.

Brown, C. V., & Vessey, I. (2003). Managing the next wave of enterprise systems: Leveraging lessons from ERP. MIS Quarterly Executive, 2(1), 65–77.

Carlino, J. (1999). AMR Research unveils report on enterprise application spending and penetration. Retrieved July 2001, from www.amrresearch.com/press/files/99823.asp

Dahlen, C., & Elfsson, J. (1999). An analysis of the current and future ERP market – with a focus on Sweden. Retrieved April 24, 2003, from http://www.pdu.se/xjobb.pdf

Davenport, T. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, 76(4), 121–131.

Davenport, T. (2000). Mission critical: Realizing the promise of enterprise systems. Boston, MA: Harvard Business School Press.

Davenport, T., Harris, J., & Cantrell, S. (2003). Enterprise systems revisited: The director's cut. Accenture.

Dillard, J. F., & Yuthas, K. (2006). Enterprise resource planning systems and communicative actions. Critical Perspectives on Accounting, 17(2-3), 202–223. doi:10.1016/j.cpa.2005.08.003

Hackney, R., Burn, J., & Dhillon, G. (2000). Challenging assumptions for strategic information systems planning: Theoretical perspectives. Communications of the Association for Information Systems, 3(9).
Hayman, L. (2000). ERP in the Internet economy. Information Systems Frontiers, 2, 137–139. doi:10.1023/A:1026595923192

Holland, C., & Light, B. (2001). A stage maturity model for enterprise resource planning use. Database for Advances in Information Systems, 35(2), 34–45.

Ikavalko, H., & Aaltonen, P. (2001). Middle managers' role in strategy implementation – middle managers' view. In Proceedings of the 17th EGOS Colloquium, Lyon, France.

Jensen, R., & Johnson, R. (1999). The enterprise resource planning system as a strategic solution. Information Strategy, 15(4), 28–33.

Jutras, C. (2007). The role of ERP in globalization. Available at http://www.aberdeen.com/summary/report/benchmark/RA_ERPRoleinGlobalization_CJ_3906.asp

Kalling, T. (2003). ERP systems and the strategic management processes that lead to competitive advantage. Information Resources Management Journal, 16(4), 46–67.

Kaplan, R., & Norton, D. P. (1996). Using the balanced scorecard as a strategic management system. Harvard Business Review, (Jan/Feb), 75–85.

Laughlin, S. P. (1999). An ERP game plan. The Journal of Business Strategy, 20(1), 32–37. doi:10.1108/eb039981

Lee, J. C., & Myers, M. D. (2004). Dominant actors, political agendas, and strategic shifts over time: A critical ethnography of an enterprise systems implementation. The Journal of Strategic Information Systems, 13, 355–374. doi:10.1016/j.jsis.2004.11.005

Lee, Z., & Lee, J. (2000). An ERP implementation case study from a knowledge transfer perspective. Journal of Information Technology, 15, 281–288. doi:10.1080/02683960010009060
Luo, W., & Strong, D. M. (2004). A framework for evaluating ERP implementation choices. IEEE Transactions on Engineering Management, 51(3), 322–333. doi:10.1109/TEM.2004.830862

Mabert, V. A., Soni, A., & Venkataramanan, M. A. (2001). Enterprise resource planning: Common myths versus evolving reality. Business Horizons, 44(3), 69–76. doi:10.1016/S0007-6813(01)80037-9

Marketwire. (2007). Thinking global? Don't lose sight of profitable growth. Available at http://www.marketwire.com/mw/release_html_b1?release_id=224493

Markus, L., Petrie, D., & Axline, S. (2001). Bucking the trends: What the future may hold for ERP packages. In Shanks, Seddon, & Willcocks (Eds.), Enterprise systems: ERP, implementation and effectiveness. Cambridge University Press.

Markus, M. L. (1999). Keynote address: Conceptual challenges in contemporary IS research. In Proceedings of the Australasian Conference on Information Systems (ACIS), New Zealand (pp. 1–5).

Markus, M. L., & Tanis, C. (1999). The enterprise systems experience – from adoption to success. In R. W. Zmud (Ed.), Framing the domains of IT research: Glimpsing the future through the past. Cincinnati, OH: Pinnaflex Educational Resources.

Mintzberg, H. (1992). Five Ps for strategy. In H. Mintzberg & J. B. Quinn (Eds.), The strategy process. Englewood Cliffs, NJ: Prentice-Hall.

Newell, S., Huang, J. C., Galliers, R. D., & Pan, S. L. (2003). Implementing enterprise resource planning and knowledge management systems in tandem: Fostering efficiency and innovation complementarity. Information and Organization, 13, 25–52. doi:10.1016/S1471-7727(02)00007-6
Oliver, R. (1999). ERP is dead, long live ERP. Management Review, 88(10), 12–13.

Palaniswamy, R., & Frank, T. (2000). Enhancing manufacturing performance with ERP systems. Information Systems Management, 17(3), 43–55.

Pant, S., & Hsu, C. (1995). Strategic information systems: A review. In Proceedings of the 1995 IRMA Conference, Atlanta, Georgia.

Piturro, M. (1999). How midsize companies are buying ERP. Journal of Accountancy, 188(3), 41–48.

Plotkin, H. (1999). ERP: How to make them work. Harvard Management Update, March, 3–4.

Porter, M. (1985). Competitive advantage: Creating and sustaining superior performance. New York: Free Press.

Porter, M. (1996). What is strategy? Harvard Business Review, November–December.

Porter, M., & Millar, V. (1985). How information gives you competitive advantage. Harvard Business Review, 63(4).

Ross, J. W., & Vitale, M. R. (2000). The ERP revolution: Surviving vs. thriving. Information Systems Frontiers, 2, 233–241. doi:10.1023/A:1026500224101

Scott Morton, M. S. (1991). The corporation of the 1990s: Information technology and organizational transformation. New York: Oxford University Press.

Shanks, G., & Seddon, P. (2000). Editorial. Journal of Information Technology, 15, 243–244. doi:10.1080/02683960010008935
Somers, T. M., & Nelson, K. G. (2001). The impact of critical success factors across the stages of enterprise resource planning system implementations. In Proceedings of the 34th Hawaii International Conference on System Sciences (HICSS).

Somogyi, E., & Galliers, R. (1987). Towards strategic information systems. Cambridge: Abacus Press.

Stedman, C. (1999). What's next for ERP? Computerworld, 33(33).

Tallon, P., & Kraemer, K. (2003). Investigating the relationship between strategic alignment and IT business value: The discovery of a paradox. Hershey, PA: Idea Group Publishing.

Teo, T., & King, W. (1997). Integration between business planning and information systems planning: An evolutionary contingency perspective. Journal of Management Information Systems, 14.
KEY TERMS AND DEFINITIONS

Business Benefits: The combined tangible and intangible benefits that accrue to organizations from ERP implementation; associated in the manuscript with the effects and benefits of ERP implementation.

Business Processes: IT-enabled business processes, and the integration of business processes; associated in the manuscript with re-engineering business processes to better align with the ERP software.

Competitive Advantage: An organization that sustains profits exceeding the industry average is said to possess a competitive advantage; an advantage over competitors gained by offering consumers greater value, either through lower prices or through greater benefits; associated in the manuscript with competitive advantage through ERP.

ERP: Enterprise Resource Planning; systems that attempt to integrate all the data and processes of an organization into a unified system; associated in the manuscript with effects and strategic perspectives in organizations.

Global: Globalization; multinational organizations, distributed at the international level; associated in the manuscript with the global market for ERP software and the global marketplace.

Integrated: Combining different functional aspects of the organization, as in an integrated application suite; associated in the manuscript with an integrated software system linking core business activities such as operations, manufacturing, sales, accounting, human resources, and inventory control.
This work was previously published in Handbook of Research on Enterprise Systems, edited by Jatinder N. D. Gupta, Sushil Sharma and Mohammad A. Rashid, pp. 57-66, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 5.10
The Management of CRM Information Systems in Small B2B Service Organisations: A Comparison between French and British Firms

Călin Gurău
GSCM – Montpellier Business School, France

DOI: 10.4018/978-1-60566-892-5.ch025
Abstract

The new communication and information systems have significantly increased the possibilities offered to professional companies for developing and maintaining long-term customer relationships. However, technology alone cannot ensure the success of CRM strategies. The implementation of a customer-centred culture, shared by the entire professional organisation, requires the combination of human resources, expertise and technology in order to identify and satisfy the needs of the existing customers. Considering a sample of French and UK professional SMEs, this chapter investigates the type of CRM strategy implemented by these firms, as well as the usage intensity of various communication channels, both by companies and clients. The satisfaction of client organisations is analysed from a multi-level perspective, and a diagnostic procedure is proposed in order to identify the gap between the perceptions of service provider firms and clients on various dimensions of the CRM process.
Introduction

The development of new Information Technology and Telecommunication (ITT) systems, such as the Internet or mobile phones, has opened new possibilities for improving the relationship between service providers and clients (Brechbühl, 2004; Smith, 2000). Many studies (Iyer, 2003; Kalakota & Robinson, 2001; Leger, 2000; Zeng, Wen & Yen, 2003) have emphasised that, in a digital economy, the quality of customer-company interactions represents a complex combination of the feasibility and usability of the ITT systems used as interaction channels and the efficiency of the CRM procedures implemented by the firm.
On the other hand, the interaction between clients and service providers in the Business-to-Business (B2B) market often takes place through a variety of channels, both digital and non-digital. In these conditions, customer satisfaction will be determined by the capacity of the firm to effectively manage multi-channel customer interactions, integrating CRM procedures with channel management (Johnson, 2002). Significant research has been conducted on consumers' use of various communication channels and the relative satisfaction level of customers. Iyer (2003) presented a comparison of the levels of satisfaction associated with the use of various communication channels for accessing customer services: 62% of respondents associated a high level of satisfaction with online chat, followed by 49% of respondents for in-person communication and 46% for telephone interaction. At the other end of the scale, postal mail and fax communication were associated with a high level of satisfaction by only 24% of respondents. This chapter presents a study of B2B service interactions, which attempts to identify the main strategies related to the management of CRM information systems, and to measure the preference of both service providers and client firms for various channels of interaction. The digital communication channels are considered together with the traditional channels in the context of the CRM applications implemented by service organisations (web design and consulting firms). The company-customer interaction is treated as a multi-dimensional process, which involves a systemic side – the ITT system implemented by the firm – and a procedural aspect – the CRM procedures applied by the service provider in the various stages of its interaction with the customer. Considering this approach, the study has the following research objectives:

1. To study the main strategies for managing the CRM information systems used by small B2B service organisations.
2. To identify the communication channels used by firms and their integration with customer management procedures.
3. To analyse the level of satisfaction determined by the interaction between service organisations and client firms in relation to the use of ITT and ITT-based customer management procedures.

For a better understanding of the overall context of B2B interactions, both digital and non-digital communication channels have been analysed and evaluated in relation to the CRM procedures implemented by service provider organisations. After a brief discussion of the previous studies published in this area, the chapter presents the research methods applied to collect primary and secondary data. The research data are then analysed and presented in direct connection with the formulated research objectives. The chapter concludes with a synthetic discussion of the main findings, which are used to propose a diagnostic procedure for measuring the level of satisfaction with company-customer interactions, analytically developed on the ITT and CRM dimensions.
Background: information systems and applications for CRM

The development of new CRM technology applications has increased the capacity of firms to manage their customers more efficiently. These applications, properly run on the available ITT platforms, can link the customer interface with front office operations – sales, marketing, customer service, etc. – and with back office support – logistics, operations, human resources, etc. (Chen & Popovich, 2003). However, the structure and functionality of the CRM information system will differ from one company to another, depending on its specific activity profile and strategic objectives.
Some organisations consider the CRM system a simple technology solution that integrates databases and sales automation in order to reduce the cost of transactions and marketing activities. Others define it as a communication tool that permits the customisation of company-customer interaction (Peppers & Rogers, 1999, 2001). However, both of these descriptions are limiting. The CRM system must be based on the implementation of a customer-centric culture which permeates the entire organisation and modifies, according to its logic, the strategy, structure and operations of the entire enterprise (Rheault & Sheridan, 2002). The interface between the organisation and its customers is represented by a number of 'touch points', in which the interaction between customers and organisational systems takes place (Fickel, 1999). These touch points can include fixed and mobile phone, fax, Internet, email, online discussion forums, or face-to-face encounters. Traditionally, these touch points were controlled by separate information systems and, sometimes, by different organisational departments. The revolution introduced by CRM applications is the capacity to collect the data from all these touch points into a centralised database that can be dynamically accessed by any person within the organisation (Eckerson & Watson, 2000). For an effective implementation and use of the CRM system, these touch points have to function properly. Iyer (2003) investigated the preference and level of satisfaction of customers who accessed customer service facilities, including in his study both digital and non-digital communication channels. The findings indicate large variability both in the number of clients using various communication channels and in their level of satisfaction. The data collected from these touch points then has to be organised and archived in a centralised database, which should be capable of processing large volumes of data coming from multiple, heterogeneous channels (Eckerson & Watson, 2000).
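The centralised, dynamically accessible archive described above can be illustrated with a short sketch. The following Python fragment is not from the chapter – all class, method and customer names are hypothetical – but it shows the basic idea: interaction records arriving from heterogeneous touch points are appended to a single customer-keyed store that any employee could query.

```python
# Illustrative sketch only: a minimal centralised CRM archive that gathers
# interactions from heterogeneous touch points (email, telephone, fax,
# face-to-face, ...) into one customer-keyed store. All names are hypothetical.
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Interaction:
    channel: str      # e.g. "email", "telephone", "face-to-face"
    summary: str      # free-text note captured at the touch point
    timestamp: datetime = field(default_factory=datetime.utcnow)

class CentralisedCRM:
    """One archive replacing separate per-channel, per-department records."""

    def __init__(self) -> None:
        self._records = defaultdict(list)   # customer_id -> [Interaction]

    def record(self, customer_id: str, channel: str, summary: str) -> None:
        self._records[customer_id].append(Interaction(channel, summary))

    def history(self, customer_id: str) -> list:
        """A channel-independent view of the customer, sorted by time."""
        return sorted(self._records[customer_id], key=lambda i: i.timestamp)

crm = CentralisedCRM()
crm.record("client-042", "email", "Requested a quote for a site redesign")
crm.record("client-042", "telephone", "Clarified the project scope")
for interaction in crm.history("client-042"):
    print(interaction.timestamp.isoformat(), interaction.channel,
          "-", interaction.summary)
```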
The ITT system and data warehousing technology represent useful tools for implementing a customer-centric CRM system. However, the way in which these are used by company employees to build and maintain satisfactory relationships with customers represents the central element of the CRM philosophy (Metallo, Cuomo & Festa, 2007). Some researchers have even outlined the danger of alienating customers through an excessively technological approach to relationship marketing (Hughes, Foss, Stone & Cheverton, 2007). Particularly in service organisations, customers expect to be managed in a personalised way by a company representative who can keep track of their specific needs and problems. Harrison-Walker and Neeley (2004) attempted to develop a typology of customer relationship building on the Internet in B2B marketing, by combining the various stages of the purchase decision process with the levels of relationship marketing proposed by Berry and Parasuraman (1991). Despite the merits of this model, real-life applications may be somewhat different, depending on the specific situation of the market and on the strategic orientation of the firm. For example, small and medium-sized enterprises might not have the necessary resources to implement and apply all the stages of this model. On the other hand, in some industries the customers themselves might demand a specific type of relationship. Simon (2005) argues that B2B professional services firms should implement a customer service management framework based on key account managers who have professional skills, rather than relying on complex technological systems. The specific nature of this sector requires good knowledge of ethical and professional issues, as well as the capacity to treat every customer as a separate case; this approach is often not compatible with automatic, technology-based systems. Despite the clear importance of professional advisors, technology can play a big role in increasing the efficiency of company-customer relationships, especially for effective communication
and data collection. This chapter investigates the importance of both technology and personal CRM procedures for business customers' satisfaction in the B2B professional services of business consulting and web design. In order to understand the differences between national markets, this study analyses data collected about UK and French small and medium-sized professional service firms.
Research methodology

In order to evaluate the CRM strategies applied by professional SMEs, both secondary and primary data have been collected and analysed. The academic literature and professional reports dealing with CRM implementation were reviewed in order to better understand the main issues related to this topic. Following this literature review, primary data were collected during Spring–Summer 2006, in four distinct stages:

(1) 100 web design firms and 150 consulting firms, randomly selected from the yellow pages national directories of France and the UK, were contacted through email or telephone and invited to participate in this study. 39 web design firms and 68 consulting firms responded affirmatively to this request in France, while in the UK 44 web design firms and 69 consulting firms agreed to provide information for this study. 12 of these companies (6 French and 6 British, 3 from each of the two sectors of activity) were contacted for a detailed pilot study based on telephone or face-to-face interviews, in order to design and finalise the list of research questions.

(2) Following the pilot study phase, a semi-structured questionnaire was sent to the participating firms by email and, when necessary, a reminder telephone call was made. The questionnaire was returned by
103 French firms and by 111 British firms. When necessary, additional clarifications of a qualitative nature were sought through additional email messages. Each of the respondent firms provided the names and contact details of three of their most important organisational clients.

(3) After eliminating the double entries (some firms were clients of both web design and consulting firms), 131 French client companies and 153 British client firms were contacted through email or telephone and invited to participate in the study. A pilot study was also conducted with 12 of these companies, in order to design and finalise the list of research questions.

(4) A semi-structured questionnaire was then sent to the client companies, supported by a reminder telephone call. 102 questionnaires were returned by French firms and 123 by British firms. Once again, when necessary, additional clarifications of a qualitative nature were sought through additional email messages.

The questionnaire sent to services companies (web design and consulting) contained questions related to:
• the company profile: number of employees, sector of activity, number of customers;
• the main communication channels used for company-customer interaction and their perceived effectiveness;
• the main CRM strategies designed and implemented by professional services organisations, both from the point of view of managing the touch points with customers and from that of the customer relationship procedures applied by the company's personnel.
The questionnaire sent to client companies comprised the following categories of questions:
• the company profile: number of employees, sector of activity, year of incorporation;
• the relation with the service organisation(s): how the relation was initiated, the length and evolution (stages) of the relation, and the general level of satisfaction;
• the level of satisfaction related to various aspects of the service provider-client interaction (the level of interactivity allowed by the information and telecommunication technologies used by the service company; the effectiveness of key account or customer relationship management procedures).

The collected data were analysed both quantitatively and qualitatively. From a qualitative point of view, a content analysis was applied to the received answers, providing a picture of the main stages of company-customer interaction, of the main factors influencing the quality of these interactions, and of the specific application of CRM techniques. The quantitative aspects of the collected data were analysed using the SPSS software, applying cross-tabulations.
Presentation and analysis of data

The interaction between service organisations and client firms can be separated into two distinct stages: (1) the initial contact and (2) the transactional period.
1. The Integration of ITT Applications and Customer Management for the Initial Contact Stage

The quality of the ITT applications (web site, email, fax, telephone and mobile phone) is essential for initiating the interaction between the service organisation and the client firm. Both categories of respondents – firms and clients – emphasised
that often the service organisation is selected through an Internet search or using printed professional databases (such as the Yellow Pages). The potential client then attempts to find relevant information about the selected firm, in order to evaluate its capabilities and level of expertise. This information can often be found readily available on the service organisation's web site. On the other hand, the prospective client also wants to contact the service organisation in order to complement the online information, or even to make sure that the service organisation is easily reachable and readily available to the requests of its clients. All service organisations that provided primary data emphasised the importance of the initial contact stage for attracting the client and projecting the desired professional image. The communication with the client firm is either co-ordinated by a centralised reception service, or customers can directly contact a professional expert. The type of interaction during the initial contact stage is significantly influenced by the sector of activity, the size, and the number of customers of the service organisation (see Table 1). In France, both centralised reception and customer managers are used primarily by consulting organisations in comparison with web design firms. On the other hand, for the UK firms, the centralised reception is preferred by consulting firms (57.5% of the investigated firms), while the customer manager contact is implemented by two thirds (66.2%) of the web design companies. The reason for this different approach is the possible complexity of the problem raised by the client firm. Often, the consulting firm needs to collect and analyse preliminary data in order to correctly understand what the needs of the client firm are and how they can be satisfied. Only after this stage is the project allocated to the appropriate expert or group of experts within the consulting firm. Company size strongly influences the level of available resources, especially in terms of time and personnel. On the other hand, larger firms may have a more structured customer interaction
Table 1. The type of initial contact service implemented by the respondent professional service organisations, N (%)

                      | Centralised reception     | Customer manager
                      | France     | UK           | France     | UK
Web design            | 23 (46.9)  | 17 (42.5)    | 23 (42.6)  | 47 (66.2)
Consulting            | 26 (53.1)  | 23 (57.5)    | 31 (57.4)  | 24 (33.8)
Micro firms           | 13 (26.5)  | 10 (25.0)    | 28 (51.9)  | 26 (36.6)
Small firms           | 17 (34.7)  | 23 (57.5)    | 12 (22.2)  | 25 (35.2)
Medium-sized firms    | 19 (38.8)  | 7 (17.5)     | 14 (25.9)  | 20 (28.2)
Less than 10 clients  | 7 (14.3)   | 3 (7.5)      | 22 (40.7)  | 21 (29.6)
10 to 30 clients      | 22 (44.9)  | 18 (45.0)    | 15 (27.8)  | 41 (57.7)
More than 30 clients  | 20 (40.8)  | 19 (47.5)    | 17 (31.5)  | 9 (12.7)
Total                 | 49 (100)   | 40 (100)     | 54 (100)   | 71 (100)
service, with a centralised reception. This is certainly true for the French firms – a larger percentage of small and medium-sized firms prefer to use a centralised reception service for the first contact with clients – while in the UK, the small firms show a large majority in implementing the centralised reception service. The micro-sized firms tend to have a more direct and informal contact with clients in comparison with small and medium-sized companies, using a customer manager as the first contact with the client organisations. This can be explained by a strategic approach based on the customisation of company-customer interaction, in order to take advantage of the flexibility offered by the small organisational size. The organisation of a centralised reception service also seems to represent a choice of the organisations with more clients. In fact, both in France and in the UK, the larger proportion of organisations that implement a centralised customer reception have more than 10 clients. On the other hand, in France the firms with less than 10 clients clearly prefer to use a customer manager for the first customer contact, while in the UK it is rather the firms with 10 to 30 clients that adopt this customised approach.
2. The Usage Intensity of Various Communication Channels

The service organisations indicated the intensity of use of various communication channels during their interactions with client organisations (see Table 2). Email, telephone, mobile phone, and face-to-face communication represent high-intensity communication channels for a large majority of professional service organisations (more than 64% of respondents). On the other hand, fax and postal mail, although used in some measure by all the respondent organisations, seem to have a lower intensity of use. The intensity of use differs between the French and the UK service organisations. The UK firms are characterised by a slightly higher intensity of use of email, telephone, mobile phone and face-to-face communication, but a lower intensity of use of fax and postal mail. It can be concluded that, in terms of communication, the UK firms adopt a more flexible and, at the same time, more customised approach in the interaction with their clients. This trend was confirmed by the explanations provided during
Table 2. The intensity of use of various communication channels by service organisations to interact with their clients, N (%)

France:
Intensity | Email      | Telephone  | Mobile phone | Fax        | Face-to-face | Postal mail
High      | 88 (85.4)  | 67 (65.0)  | 74 (71.8)    | 8 (7.8)    | 66 (64.1)    | 43 (41.7)
Medium    | 15 (14.6)  | 36 (35.0)  | 24 (23.3)    | 83 (80.6)  | 35 (34.0)    | 38 (36.9)
Low       | 0 (0)      | 0 (0)      | 5 (4.9)      | 12 (11.6)  | 2 (1.9)      | 22 (21.4)
Total     | 103 (100)  | 103 (100)  | 103 (100)    | 103 (100)  | 103 (100)    | 103 (100)

UK:
Intensity | Email      | Telephone  | Mobile phone | Fax        | Face-to-face | Postal mail
High      | 106 (95.5) | 111 (100)  | 98 (88.3)    | 3 (2.7)    | 102 (91.9)   | 29 (26.1)
Medium    | 5 (4.5)    | 0 (0)      | 7 (6.3)      | 84 (75.7)  | 7 (6.3)      | 75 (67.6)
Low       | 0 (0)      | 0 (0)      | 6 (5.4)      | 24 (21.6)  | 2 (1.8)      | 7 (6.3)
Total     | 111 (100)  | 111 (100)  | 111 (100)    | 111 (100)  | 111 (100)    | 111 (100)
the interviews, with the UK professionals emphasising the need for 'ease of use', 'flexibility', 'mobility', and 'interactivity' in order to better serve their clients and to obtain a competitive advantage in a highly fragmented market.
3. The Organisation of the CRM Function

The respondent service organisations use three CRM models:

a. decentralised CRM – each customer is managed by a key account manager;
b. centralised CRM – the information concerning each customer is organised in a centralised archive;
c. integrated CRM system – a digital customer database is integrated with online customer applications.

The implementation of a particular system is influenced by the specific activity of the firm, its size and its number of customers (see Table 3). The web design firms, probably because of their professional expertise in information systems, implement in a larger proportion an integrated
CRM system, in comparison with the consulting firms. On the other hand, the complex nature of consulting projects can represent a reason for the use of a decentralised database, managed by the customer manager in charge of the project. In fact, the majority of consulting firms, both in France and in the UK, use a decentralised database. On the other hand, the centralised CRM is used mainly by consulting firms in France, and by web design firms in the UK. The reason indicated by the French firms for this choice is the possibility of having several consultants servicing the same clients, something that happens less often in UK consulting firms. The micro organisations have predominantly implemented a decentralised CRM system. At the other end of the scale, the integrated CRM system is preferred by a majority of small and medium-sized firms. This choice is determined partly by the level of available resources and partly by the strategic approach adopted by the service organisations. A decentralised system is less expensive and increases the customisation of the service, based on a long-term relationship between the service specialist and his/her client. On the other hand, the centralised and the integrated CRM systems require more resources for their implementation,
Table 3. The main CRM strategies used by the investigated firms, N (%)

                      | Decentralised CRM       | Centralised CRM         | Integrated CRM
                      | France     | UK         | France     | UK         | France     | UK
Web design            | 3 (15.8)   | 4 (12.5)   | 18 (48.6)  | 27 (77.1)  | 25 (53.2)  | 33 (75.0)
Consulting            | 16 (84.2)  | 28 (87.5)  | 19 (50.4)  | 8 (22.9)   | 22 (46.8)  | 11 (25.0)
Micro firms           | 12 (63.2)  | 21 (65.6)  | 19 (51.4)  | 13 (37.1)  | 10 (21.3)  | 2 (4.5)
Small firms           | 5 (26.3)   | 9 (28.1)   | 10 (27.0)  | 15 (42.9)  | 14 (29.8)  | 16 (36.7)
Medium-sized firms    | 2 (10.5)   | 2 (6.3)    | 8 (22.6)   | 7 (20.0)   | 23 (48.9)  | 26 (59.1)
Less than 10 clients  | 14 (73.7)  | 19 (59.3)  | 13 (35.1)  | 2 (5.7)    | 2 (4.3)    | 3 (6.8)
10 to 30 clients      | 5 (26.3)   | 11 (34.4)  | 10 (27.0)  | 4 (11.4)   | 22 (46.8)  | 14 (31.8)
More than 30 clients  | 0 (0)      | 2 (6.3)    | 14 (37.8)  | 29 (82.9)  | 23 (48.9)  | 27 (61.4)
Total                 | 19 (100)   | 32 (100)   | 37 (100)   | 35 (100)   | 47 (100)   | 44 (100)
but they then permit increased flexibility in managing the customer within the organisation. The situation is similar regarding the number of clients of the service organisations: the organisations with a small number of clients (less than 10) predominantly use a decentralised CRM system, which permits a closer relationship between the client and the customer manager. On the other hand, the service organisations that interact with a medium or large number of clients need a centralised or an integrated customer database in order to efficiently co-ordinate customer relationships on a dynamic basis. This strategic choice is clear in the case of the UK service organisations, while the French firms show a more balanced distribution between the centralised and the integrated CRM systems.
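Purely as an illustration, the tendencies reported above can be read as a rough selection heuristic. The sketch below encodes one possible interpretation of Table 3; the thresholds are assumptions made here, not a decision rule proposed by the chapter.

```python
# Illustrative heuristic only: the chapter reports tendencies, not a rule.
# The thresholds below are assumptions encoding the observed pattern (micro
# firms and firms with few clients lean decentralised; larger client bases
# lean towards centralised or integrated CRM).
from enum import Enum

class CRMModel(Enum):
    DECENTRALISED = "each customer managed by a key account manager"
    CENTRALISED = "customer information held in one centralised archive"
    INTEGRATED = "digital customer database linked to online applications"

def likely_crm_model(employees: int, clients: int) -> CRMModel:
    if employees < 10 or clients < 10:   # micro firm or few clients
        return CRMModel.DECENTRALISED
    if clients > 30:                     # many relationships to co-ordinate
        return CRMModel.INTEGRATED
    return CRMModel.CENTRALISED

print(likely_crm_model(employees=6, clients=8))    # CRMModel.DECENTRALISED
print(likely_crm_model(employees=40, clients=50))  # CRMModel.INTEGRATED
```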
4. The Intensity of Use of the Main Communication Channels from the Client Firms' Perspective

The values displayed in Table 4 represent important terms of comparison with the same type of data
provided by the personnel of service organisations in Table 2. Any discrepancy between the level of intensity reported for the various communication channels by service organisations and by customers indicates a possible source of problems and dissatisfaction (a computational sketch of this comparison is given after Table 4). In France, the client organisations indicate a more intensive use of the telephone, mobile phone and face-to-face communication than the service firms do, but attach a lower importance to email, fax and postal communication. In the UK, the differences between service organisations and clients are less important; however, the clients show a clear preference for the mobile phone (all client organisations indicate highly intensive use), but less for email and fixed phone. The importance of interactivity, mobility and customisation is once again emphasised by these results. On the other hand, the findings can also be used as an indication of the optimum combination of various communication channels in a complex communication portfolio, properly calibrated to the needs and wants of organisational clients.
Table 4. The intensity of use of various communication channels by client organisations to interact with their service providers. Values are N (%).

|        | Intensity | Email     | Telephone  | Mobile phone | Fax       | Face-to-face | Postal mail |
| France | High      | 78 (76.5) | 93 (91.2)  | 85 (83.3)    | 0 (0)     | 83 (81.4)    | 23 (22.5)   |
|        | Medium    | 24 (23.5) | 9 (8.8)    | 14 (13.7)    | 63 (61.8) | 17 (16.7)    | 73 (71.6)   |
|        | Low       | 0 (0)     | 0 (0)      | 3 (3)        | 39 (38.2) | 2 (1.9)      | 6 (5.9)     |
|        | Total     | 102 (100) | 102 (100)  | 102 (100)    | 102 (100) | 102 (100)    | 102 (100)   |
| UK     | High      | 98 (79.7) | 111 (90.2) | 123 (100)    | 12 (9.7)  | 114 (92.7)   | 16 (13)     |
|        | Medium    | 17 (13.8) | 12 (9.8)   | 0 (0)        | 90 (73.2) | 9 (7.3)      | 98 (79.7)   |
|        | Low       | 8 (6.5)   | 0 (0)      | 0 (0)        | 21 (17.1) | 0 (0)        | 9 (7.3)     |
|        | Total     | 123 (100) | 123 (100)  | 123 (100)    | 123 (100) | 123 (100)    | 123 (100)   |
5. The Level of Satisfaction of Client Organisations

In France, less than half (44.1%) of the client organisations are highly satisfied by their overall interaction with service organisations; on the other hand, only 2% have indicated a low level of satisfaction (see Table 5). The satisfaction related to ITT equipment and connections shows the highest levels: 55.9% of the respondent client firms are highly satisfied, and 44.1% medium satisfied. It seems that the low satisfaction of the French client firms is mainly related to the performance of the CRM system. This area has the lowest percentage of highly satisfied client firms (41.2%), and 2 respondents have indicated a low level of satisfaction. The situation is very similar for the UK organisations, although the percentage of client firms that indicate a high level of satisfaction is slightly higher than that of the French firms. However, it is interesting to note that the number of UK client firms that are not satisfied with the services of professional firms is much higher than in France (3.3% of firms indicate a low level of general satisfaction, 8.1% are not satisfied with the ITT
connections, and 7.3% consider that their level of satisfaction with CRM services is low). Many respondents have outlined their perception that a failure of the interaction process is usually determined by a poor organisation of CRM procedures, and not by the functioning of ITT systems. However, even when communication is defective as a result of an ITT malfunction, the situation is interpreted as a defective CRM organisation and co-ordination. This indicates the importance of a good integration between ITT systems and CRM procedures, but also the necessity of approaching the management of the ITT system not only as a technical problem, but rather as a customer relationship issue. This requirement is consistent with the re-design of corporate structures and functions around a customer-centred approach, and confirms the conclusions of Simon (2005) and of Stan et al. (2007).

Table 5. The level of satisfaction of the client organisations in various areas of evaluation. Values are N (%).

|        | Level of satisfaction | General   | ITT       | CRM       |
| France | High                  | 45 (44.1) | 57 (55.9) | 42 (41.2) |
|        | Medium                | 55 (53.9) | 45 (44.1) | 58 (56.8) |
|        | Low                   | 2 (2)     | 0 (0)     | 2 (2)     |
|        | Total                 | 102 (100) | 102 (100) | 102 (100) |
| UK     | High                  | 62 (50.4) | 67 (54.5) | 58 (47.2) |
|        | Medium                | 57 (46.3) | 46 (37.4) | 56 (45.5) |
|        | Low                   | 4 (3.3)   | 10 (8.1)  | 9 (7.3)   |
|        | Total                 | 123 (100) | 123 (100) | 123 (100) |
6. Dimensions of Client-Service Provider Interactions

In the information provided for this study, the client firms have emphasised the main characteristics defining a highly satisfactory interaction with service providers:
• visibility: the information about the service organisation and its activities, as well as complete contact details, should be easily available;
• reliability: the information provided by the service organisation should be error-free and updated;
• accessibility and mobility: the customer managers should be easily accessible through a variety of communication channels, both digital (email, mobile phone) and non-digital (face-to-face, telephone);
• efficiency: the specific information required by the customer should be quickly provided;
• usefulness and customisation: the information presented to the customer should be adapted to its particular needs and profile.
These dimensions are not specifically related to ITT or CRM, but rather represent a synthetic perception of the effective functioning of the communication system of service organisations. For each of these dimensions, specific requirements can be identified for the ITT system and the CRM procedures (see Table 6). The requirements listed in this table outline the importance of an effective integration between ITT systems and CRM procedures, the two elements representing, respectively, the organisational hardware and software of a well-organised professional firm. Any imbalance between these two sides of company-customer interaction has the potential to determine specific crisis situations that can reduce customer satisfaction and trust. Therefore, the service providers should develop simple but effective diagnostic tools, capable of presenting the existing situation of these requirements and of indicating areas of possible improvement.

Table 6. The specific requirements of each interaction dimension in the ITT and CRM contexts

| Dimension | ITT context | CRM context |
| Visibility | The company is reachable through various communication channels; contact details are easily available on various digital and non-digital supports. | The CRM co-ordinates a multi-channel interaction with customers, ensuring a large diffusion of contact details, and their correctness. |
| Reliability | The communication lines with the company are functioning well. | The information provided by the company is correct and up-to-date. |
| Accessibility and mobility | ITT lines permit an easy communication with the company, through fixed and mobile devices. | The CRM co-ordinates the data collection and integration through various fixed and mobile communication devices. |
| Efficiency | Quick ITT connection. | Waiting times to obtain information are reduced to the minimum possible. |
| Usefulness and customisation | Communication through a customised mix of ITT devices. | Personalised information/solutions provided to the customer. |
Future Trends: A Diagnostic Procedure for Evaluating the Effectiveness of B2B Interactions

Based on the information provided by respondents, it is possible to suggest a diagnostic procedure for evaluating the effectiveness of B2B interactions. The interaction dimensions presented above are difficult to evaluate objectively. The perceptions regarding the level of effectiveness of each dimension will vary from respondent to respondent. However, this one-to-one approach is in line with a customer-centric orientation that has to be projected into all the operations of the service provider. On the other hand, since the interaction is a bi-dimensional process between the service provider and the client, the perceptions regarding the effectiveness of communication should be measured at both ends of the interaction channel. Therefore, the evaluation of each dimension will be made both by the personnel of the service provider and by the client organisation. Finally, since the final perception represents an effect of the complex interaction between ITT systems and CRM procedures, both these aspects should be included in the diagnostic framework. The proposed diagnostic procedure has three main stages:
1. A clear definition of the five dimensions characterising the B2B interaction, and their interpretation in the context of ITT and CRM systems, should be established by the customer relations department. This definition can be reached through repeated meetings of the service provider organisation with a panel of clients, when the interaction procedures are standardised for most of the customers, or with each individual client when the customer approach is highly personalised.
2. An evaluation of each dimension should be made by the personnel of the service provider and represented on a bi-dimensional graph, comprising both the ITT and the CRM aspects. Each dimension will be measured on a scale from -5 to +5, although the division of the scales can be adapted to the needs of the organisations. Using the two measurements given for each dimension, a vector can be drawn, as represented in Figure 1.
3. Each client organisation should then evaluate the five interaction dimensions, which are represented as vectors on the same bi-dimensional graph used to display the perception of the service provider personnel. This graphic representation permits an easy identification of the main problems related to company-customer interactions, as well as of the possible differences between the perceptions of the service provider personnel and the client organisation (see Figure 2).

The causes of these perceptions should be carefully identified, in order to provide a realistic map of the present situation. This diagnostic can then provide the starting point for the improvement of company-customer interactions, through specific actions designed and applied at the ITT and/or CRM level.

Figure 1. The representation of an interaction dimension as a vector on a bi-dimensional graph

Figure 2. The representation of an interaction dimension as a vector on a bi-dimensional graph, both from the perspective of the client organisation and of the service provider
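As a rough illustration of stages 2 and 3 (compare Figures 1 and 2), the sketch below computes the gap between the provider's and the client's perception vectors for each dimension. The scores are invented and the function and variable names are ours, not part of the chapter:

```python
# Hypothetical sketch of the proposed diagnostic: each interaction dimension
# is scored on the ITT and CRM axes (-5 to +5) by both the service provider
# and the client, and the gap between the two perception vectors is computed.
from math import hypot

DIMENSIONS = ["visibility", "reliability", "accessibility and mobility",
              "efficiency", "usefulness and customisation"]

def perception_gaps(provider, client):
    """provider/client map each dimension to an (ITT, CRM) score pair."""
    gaps = {}
    for dim in DIMENSIONS:
        p_itt, p_crm = provider[dim]
        c_itt, c_crm = client[dim]
        # Euclidean distance between the two vectors on the bi-dimensional graph
        gaps[dim] = hypot(p_itt - c_itt, p_crm - c_crm)
    return gaps

# Illustrative scores only (not data from the study)
provider = {"visibility": (4, 3), "reliability": (4, 4),
            "accessibility and mobility": (3, 2), "efficiency": (4, 3),
            "usefulness and customisation": (3, 3)}
client = {"visibility": (3, 1), "reliability": (4, 2),
          "accessibility and mobility": (1, 0), "efficiency": (2, 1),
          "usefulness and customisation": (2, 0)}

for dim, gap in sorted(perception_gaps(provider, client).items(),
                       key=lambda item: -item[1]):
    print(f"{dim}: perception gap {gap:.1f}")
```

Dimensions with the largest gaps would then indicate the areas where improvement actions at the ITT and/or CRM level are most needed.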
Conclusion

The quality of company-customer interactions determines the level of satisfaction of client organisations and, as a consequence, the length and the value return of the B2B relationships. The development of advanced ITT devices today offers multiple possibilities for B2B interactions but, on the other hand, increases the pressure on service organisations to adopt a multi-channel model, integrated with a customer-focused approach. This paper attempted to identify the type and the characteristics of B2B interactions between professional service organisations and client firms, and to evaluate the level of satisfaction of the client organisations, using a sample of companies from France and the UK. The preference of client organisations regarding various communication channels contradicts in some respects the intensity of use indicated by the professional service firms. This indicates the need for a complex evaluation of the main communication procedures and the integration of all company-customer interactions into a multi-channel system. On the other hand, although the client firms seem quite satisfied by their interaction with professional organisations, in a number of cases the indicated level of satisfaction was low, mainly in relation to the CRM procedures and applications. Five main interaction dimensions have been identified as the framework used by client organisations to evaluate the quality of B2B interactions. These dimensions are complex constructs that have a double projection in the context of ITT systems and CRM procedures. After providing a general description of these dimensions, the paper proposed a diagnostic procedure for evaluating the perception gaps between the service provider firm and the client organisation concerning the quality level of each dimension. This diagnostic can be adapted and used by each service provider organisation to identify possible areas of customer dissatisfaction and requirements for future improvements. This study has a number of limitations, determined mainly by its exploratory approach. Although the response rates obtained from both service providers and customer organisations are high, the populations of study were fairly limited in number and activity area. On the other hand, the findings regarding the importance allocated to various channels of communication show only a general trend, and do not permit the design and application of a personalised interaction framework with each client organisation. On the basis of these general results, future research projects can adopt a case study approach to investigate the particular strategies or models implemented by specific service provider organisations to maximise the value of their customer relationships.
References

Berry, L., & Parasuraman, A. (1991). Marketing services: Competing through quality. New York: The Free Press.
Brechbühl, H. (2004). Best practices for service organisations. Business Strategy Review, 14(1), 68–70. doi:10.1111/j.0955-6419.2004.00302.x
Chen, I. J., & Popovich, K. (2003). Understanding customer relationship management. Business Process Management Journal, 9(5), 672–688. doi:10.1108/14637150310496758
Eckerson, W., & Watson, H. (2000). Harnessing customer information for strategic advantage: Technical challenges and business solutions. Chatsworth: The Data Warehousing Institute.
Fickel, L. (1999). Know your customer. CIO Magazine, 12(21), 62–72.
Harrison-Walker, L. J., & Neeley, S. E. (2004). Customer relationship building on the Internet in B2B marketing: A proposed typology. Journal of Marketing Theory and Practice, 12(1), 19–35.
Hughes, T., Foss, B., Stone, M., & Cheverton, P. (2007). Degrees of separation: Technological interactivity and account management. International Journal of Bank Marketing, 25(5), 315–335. doi:10.1108/02652320710772989
Iyer, A. (2003). Beyond the phone: Benefits of the integrated contact center. Retrieved September 2008, from http://www.crm2day.com/highlights/EpyEVVAyZFuVTSlcjp.php
Johnson, L. K. (2002). New views on digital CRM. MIT Sloan Management Review, 44(1), 10.
Kalakota, R., & Robinson, M. (2001). m-Business: The race to mobility. eAI Journal, December, 44–46. Retrieved October 2007, from http://www.bijonline.com/PDF/mBusinessKalakota.pdf
Leger, P. (2000). Customer interaction in the digital age: Strategies for improving satisfaction and loyalty. The Utilities Project, Vol. 1. Retrieved September 2008, from http://www.utilitiesproject.com/documents.asp?grID=86&d_ID=150
Metallo, G., Cuomo, M. T., & Festa, G. (2007). Relationship management in the business of quality and communication. Total Quality Management, 18(1-2), 119–133.
Olsen, G. (2000). An overview of B2B integration. eAI Journal, May, 28–36. Retrieved September 2008, from http://www.bijonline.com/PDF/B2BOverview%20-%20Oslen_1.pdf
Peppers, D., & Rogers, M. (1999). The one to one manager: Real-world lessons in customer relationship management. New York: Doubleday.
Peppers, D., & Rogers, M. (2001). One to one B2B: Customer development strategies for the business-to-business world. New York: Doubleday.
Rheault, D., & Sheridan, S. (2002). Reconstruct your business around customers. The Journal of Business Strategy, 23(2), 38–42. doi:10.1108/eb040236
Simon, G. L. (2005). The case for non-technical client relationship managers in B2B professional services firms. Service Quality Quarterly, 26(4), 1–18.
Smith, K. E. (2000). CRM sizzles in the digital economy. VARBusiness, 16(25), 104–105.
Stan, S., Evans, K. R., Wood, C. M., & Stinson, J. L. (2007). Segment differences in the asymmetric effects of service quality on business customer relationships. Journal of Services Marketing, 21(5), 358–369. doi:10.1108/08876040710773660
Zeng, Y. E., Wen, H. J., & Yen, D. C. (2003). Customer relationship management (CRM) in business-to-business (B2B) e-commerce. Information Management & Computer Security, 11(1), 39–44. doi:10.1108/09685220310463722
This work was previously published in Enterprise Information Systems for Business Integration in SMEs: Technological, Organizational, and Social Dimensions, edited by Maria Manuela Cruz-Cunha, pp. 454-467, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 5.11
Information Technologies as a Vital Channel for an Internal E-Communication Strategy

José A. Lastres-Segret
University of La Laguna, Spain

José M. Núñez-Gorrín
University of La Laguna, Tenerife, Spain
DOI: 10.4018/978-1-59904-883-3.ch078

INTRODUCTION

The use of traditional marketing tools to increase effectiveness and efficiency within organizations has been studied by various authors. This topic, known as internal marketing (IM), starts by defining employees as internal clients whose needs and expectations the organization must satisfy in order to achieve their engagement and motivation at work, with a view to increasing productivity and competitiveness within the organization. Among the most important, and perhaps least discussed, aspects of IM are the information technologies (IT): first, as a means to ease work and thereby increase labor life quality (LLQ); furthermore, information technologies are a vital channel for developing an effective internal e-communication (internal electronic communication) strategy. In both cases, the new technologies offer enormous possibilities as supporting strategies for IM, allowing the organization to have a direct and permanent relationship with its workers in any place, an important aspect for successful integration and participation in the global economy. The present article discusses the challenges and possibilities that IT offers as IM support, mainly as an important channel for internal e-communication.
BACKGROUND

As Weill, Subramani, and Broadbent (2002) point out, one of the most critical decisions that directors have to make today concerns the investments they make in IT. These decisions can enable or impede business initiatives and can, in fact, provide a platform for implementing future strategies. This is also true for the development of an IM strategy, and especially for decisions related to internal communication that use IT as their base. To understand the IM model it is useful to keep in mind the definition given by Barney (1991), who points out that this model consists of using marketing tools to develop a human resource that represents a true value to the company for its capacity to implement strategies that will make the organization more effective and efficient. Authors such as Berry (1981) propose, in this sense, that workers should be seen as internal clients, visualizing the job as an internal product that should satisfy the needs and desires of these internal clients. Quintanilla (1991) says that companies should turn themselves into personalized organizations, in which the work satisfaction and the self-esteem development of every employee are treated as important. IM proposes that satisfied workers can be more productive and, as a result, their organizations more profitable. Regarding the motivation achieved by employees, Levionnois (1992) documented a number of successful experiences that illustrate the benefits and virtues of IM. Consistent with the IM model, Peppard (2003) also points out that, for IT within organizations, users should be treated as internal clients to whom a portfolio of services should be offered. Among these services, channels of communication with the company could be made available to workers in both directions, company-to-employees and employees-to-company.
Ahmed and Rafiq (2003) established an IM conceptual model in which they refer to the "4 P's" of traditional marketing, that is, product, price, place, and promotion, complemented with additional "P's", namely service, process, and physical evidence, in the terms described by Kotler (2000). Ahmed, Rafiq, and Saad (2003) propose an IM mix that includes the following aspects: a) strategic rewards; b) internal communication; c) training and development; d) organizational structure; e) executive leadership; f) physical environment; g) staffing, selection, and promotions; h) interfunctional coordination; i) incentives system; j) empowerment; and k) process and operations changes. However, it should be pointed out that IM is still under development. In this sense, it is important to mention the opinion of Papasolomou-Doukakis (2004), who affirms that the development of IM has been produced by a mix of ideas and theories that have been brought together under the IM umbrella. In this development there is still a lack of unanimity about its definitions and basic principles, which has produced a variety of implementation forms in practice.
Regarding internal communication, we use the proposal of Frank and Brownell (1989), who define it as communication transactions between individuals and/or groups at different levels and in different areas of specialization within the company, intended to design or redesign the organization or to coordinate day-to-day activities. Ahmed and Rafiq (2003) point out the importance of internal communication, indicating that it is necessary to establish a link that allows selling the internal clients the values and attitudes needed to make their strategies succeed. Soriano (1993) says that what should be sold and communicated are the company's identity and image, its corporate values, its plans and projects under development, its organization, its management model, the possibilities for individual development, the work conditions it offers, the existing working atmosphere and environment, its products and services, its achievements, its success history, and its contribution to the community.
Smidts, Pruyn, and Van Riel (2001) say that a positive communication environment creates a strong organizational identification, inviting employees to participate actively in discussions about organization topics and getting them involved in the decision-making process. Employees feel proud of being part of a respected organization, and internal communication is one of the ways to achieve that perception on the employees' side. Dutton, Dukerrich, and Harquail (1994) indicate that being well informed about the organization's elements, such as its goals, objectives, and achievements, allows employees to distinguish the outstanding characteristics that differentiate their organization from others. Exposing the organization's identity is fundamental to achieving the employees' identification. The members feel proud of being part of an organization that has socially recognized characteristics. Ashforth and Mael (1989) maintain that employees who strongly identify with their organizations tend to show a supportive attitude toward them. Furthermore, Simon (1997) states that employees who identify with their organizations make decisions that are consistent with the organization's objectives. Miquel and Marín (2000) consider that the product to be exchanged with employees should include ideas, goods, and services that the job, and all that comes with it, gives back, and that this must be communicated. They have pointed out that one of the important aspects of LLQ is internal communication itself. Barranco (2000) suggests the need to create information centers dedicated to communicating relevant information about the company to its employees.
In a global economy, in which organizations operate in different countries, employees travel constantly and different cultures coexist internally, channels are needed to achieve agile and personalized communication regardless of the employees' location. This need, created by the global economy, encourages the development of e-communication as a permanent means of communication with global reach, using multiple instruments, and as a mechanism that allows personalizing even further the information offered to employees. However, despite the conceptual development of IM, important topics such as the development of internal communication strategies or the use of IT have not received the attention they deserve. This is why it is important to discuss the potential and the challenges that IT brings as a base for creating an electronic communication platform, giving rise to what we call internal e-communication. This electronic platform, on which IM is to be developed, has a special importance in the global economy, in which different companies coexist, with operations in different countries and work groups integrated at different locations, while it remains necessary to guarantee that the same values are shared and the same objectives pursued. The conceptual discussion of the advantages of internal e-communication for the development of internal marketing, and of the use of IT, is the central axis of the present article.
CHALLENGES OF IT FOR COMMUNICATION WITH INTERNAL CLIENTS IN A GLOBAL ECONOMY

As an IM support and a channel for internal e-communication, IT should be developed not just to offer information to employees, but also to be an efficient way to collect information about the workers. These technologies should allow employees to offer the organization information about their needs, desires, and personal expectations, and about the organization's level of accomplishment of these. The companies should, in this sense, have databases that let them know their internal clients and the segments into which they can be grouped, with the objective of offering them a job/product with the highest possible LLQ.
Communication between the employees and the company should also be made easier at different levels. It is necessary that employees are able to use electronic channels to give their opinions, suggestions, and general criticisms, through which they can individually update the information about their needs, desires, and expectations, and evaluate the LLQ that the company offers. Moreover, workers could point out the elements of their work to which they can give more dedication, and propose the incentives that could be offered to them for the achievement of specific objectives. Tools like Web pages, conventional electronic mail, voice mail messages, and SMS messages would be useful channels for internal e-communication, in both directions, between the company and its employees. In a global economy, a system like the one mentioned will make it possible to get to know the employees of organizations located in different places and cultures, who can have important differences in their needs, desires, and expectations. It will also allow the product (job) on offer to be modified and adjusted, so that it is better adapted to the employees, taking into account the LLQ of the different segments of employees. Internal e-communication is also useful for selling the product/job to the internal clients, promoting the organization's vision, values, objectives, achievements, and challenges, and informing employees about what the company expects from them. In the same way, IT allows workers to receive permanent and relevant information about what happens in the organization. IT also has the challenge, within IM, of serving as a point of contact with potential internal clients, supporting the human resources selection process by informing about the advantages of working for the organization. These electronic channels, in both directions, with internal clients and with potential internal clients, should be developed for easy and intuitive use. Internal clients themselves should participate in their design, so that the communication platform is in harmony with their needs.
It should be kept in mind that LLQ, as Miguel and Miguel (2002) maintain, can be established as a measure of an employee's liking for his/her job, the stability obtained, and the environment of the work place. Moreover, LLQ incorporates other parameters such as a pleasant physical environment, participation in the decision-making process, the level of independence in acting, communication, self-confidence, professional training, and the pride the employees feel towards the company. IT is a medium that can contribute directly and in an important way to LLQ, easing the work of the organization's personnel, automating routine work, and making it possible for workers to have more time left to dedicate to creative tasks, which are more motivating and interesting and can be positive in creating value for the company. Internal e-communication based on IT has an important repercussion on LLQ, allowing the company to offer workers more motivating and interesting jobs. Employees who have a higher LLQ will be more disposed to offer a greater effort, that is, to pay a higher net price (NP), with the result that the organization will obtain a greater commitment from its employees towards increasing productivity and competitiveness.
Nowadays, organizational communications are mainly made in a hierarchical-organizational way with a top-down direction. It is proposed that internal e-communication facilitate communication within the whole organization, creating effective, agile, and personalized communication: two-way, between the company and its employees, and multi-way, between employees. Figure 1 shows the hierarchical-organizational communication net, comparing it to the internal e-communication net.

Figure 1. Hierarchical-organizational communication net (a) versus the internal e-communication net (b)

This communication offers employees the possibility of expressing their points of view and communicating their needs. Furthermore, it allows the company to get to know its employees, offer them adequate incentives, and capture ideas from within the organization. It also facilitates the coordination of activities among employees and promotes the discussion of ideas, which can enhance organizational innovation and, as a result, competitiveness. This proposal increases the employees' power, because they participate more actively in the enterprise's development. It also gives employees a greater responsibility regarding their own communications to the company. Using internal e-communication, communication within companies will be even more continuous and agile; it will count on more channels, more formats, and more possibilities, and it will contribute to creating flatter organizations from the communicational point of view. Among the future challenges of internal e-communication is the use of IT to get to know employees regarding their preferences, needs, cultures, family characteristics, and interests, with the goal of offering them jobs and work places, incentives, and programs that are better adapted to them. IT will allow the communication itself to be better adapted, facilitating the process of achieving a more personal communication.
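A minimal sketch of the kind of internal-client database and two-way channel described above; the data model and all field names are hypothetical illustrations rather than part of the original proposal:

```python
# Hypothetical sketch of an internal-client profile store supporting the
# two-way internal e-communication described above: employees update their
# own needs, desires and expectations, and the company segments them to
# adapt the job/product it offers. All field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class InternalClientProfile:
    employee_id: str
    location: str                          # relevant in a global organisation
    preferred_channels: list[str] = field(default_factory=list)
    needs: list[str] = field(default_factory=list)
    llq_rating: int | None = None          # employee's own evaluation of LLQ

class InternalClientDB:
    def __init__(self):
        self._profiles: dict[str, InternalClientProfile] = {}

    def upsert(self, profile: InternalClientProfile) -> None:
        # Employees can update their own profile through any electronic channel.
        self._profiles[profile.employee_id] = profile

    def segment_by_channel(self, channel: str) -> list[InternalClientProfile]:
        """Group internal clients so communication can be personalised."""
        return [p for p in self._profiles.values()
                if channel in p.preferred_channels]

db = InternalClientDB()
db.upsert(InternalClientProfile("e-001", "Madrid",
                                preferred_channels=["email", "SMS"],
                                needs=["flexible schedule"], llq_rating=4))
print([p.employee_id for p in db.segment_by_channel("SMS")])
```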
FUTURE TRENDS

Organizations will have the opportunity, and the obligation, in an ever more competitive environment, to expand their e-communication channels through the use of IT, taking advantage of new technological developments and getting the maximum out of the available channels at every moment. Internal e-communication has as future challenges the use of the new channels that will progressively be within reach, bearing in mind the fast technological advance in which we live, and the search for new ways to accomplish a better communication: more continuous, agile and personalized, two-way, and in multiple directions between internal clients and the organization. It will be equally necessary to carry out future empirical investigations to establish the characteristics that internal e-communication should have, the appropriate frequencies, the relationship with the workers' effort, and the electronic devices that could be used and their convenience.
CONCLUSION

IT has an important role in the development of internal marketing, mainly in supplying channels for two-way internal e-communication with employees. These channels can help to increase workers' LLQ by adapting their jobs and activities in the best way possible to their needs and desires. Equally, IT has an effect on LLQ by making it possible to offer employees jobs that are more motivating and interesting.
It can be affirmed that IT is going to become a useful tool for the implementation of internal marketing, easing the interconnection between the company and its personnel in any place of the world. At the same time, it contributes to making the world flatter, as Friedman (2006) affirms, supporting the organization's role in taking on the challenges of the global economy. E-communication adds to the electronic management of human resources the capacity to communicate with employees considering their culture, interests, and personal motivations. Raising employees' participation by allowing them to share their ideas will help to increase innovation and the coordination of activities. Furthermore, it allows the offering of jobs, environments, work places, and incentive programs more suitable to every employee. To achieve the implementation of internal e-communication, support is required from the executives regarding the assignment of resources and priorities; however, as Morgan (2004) points out, executives recognize the worth of IT for organizational effectiveness, but few understand its role and potential contribution to increasing the organization's value. For this reason, implementing internal e-communication requires that executives understand the potential it offers within organizations, which could be a future research area.

REFERENCES
Ahmed, P., & Rafiq, M. (2003). Internal marketing. New York: McGraw-Hill/Irwin. Ahmed, P., Rafiq, M., & Saad, N. (2003). Internal marketing and mediating role of organizational competencies. European Journal of Marketing, 37(9), 1221–1240. doi:10.1108/03090560310486960 Ashforth, B. E., & Mael, F. A. (1989). Social identity and organization. Academy of Management Review, 14, 20–39. doi:10.2307/258189
Barney, J. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120. doi:10.1177/014920639101700108 Barranco, S. F. (2000). Marketing interno y gestión de recursos humanos. Madrid: Editorial Pirámide. Berry, L. (1981, March). The employee as customer. Journal of Retail Banking, 3, 25–28. Dutton, J. E., Dukerrich, J. M., & Harquail, C. V. (1994). Organizational images and member identification. Administrative Science Quarterly, 39, 239–263. doi:10.2307/2393235 Frank, A., & Brownell, J. (1989). Organizational communication and behavior: Communicating to improve performance. Orlando, FL: Holt, Rinehart & Winston. Friedman, T. L. (2006). La Tierra es plana. Breve historia del mundo globalizado del siglo XXI. Barcelona: Editorial Martínez Roca, SA. Kotler, P. (2000). Dirección de marketing. Madrid: Prentice Hall. Levionnois, M. (1992). Marketing interno y gestión de recursos humanos. Madrid: Editorial Díaz Santos. Miguel, A., & Miguel, I. (2002). Calidad de vida laboral y organización en el trabajo. Madrid: Ministerio de Trabajo y Asuntos Sociales. Miquel, S., & Marín, C. (2000). Marketing interno, objeto, instrumentos funcionales y planificación (Working paper No. 100). Valencia, Spain: Universidad de Valencia. Morgan, R. (2004). Business agility and internal marketing. European Business Review, 16(5), 464–472. doi:10.1108/09555340410699811 Papasolomou-Doukakis, I. (2004). Internal marketing in UK bank: Conceptual legitimacy or window dressing? The International Journal of Bank, 22(6), 421–452. doi:10.1108/02652320410559349
Peppard, J. (2003). Managing IM as a portfolio of services. European Management Journal, 21(4), 467–483. doi:10.1016/S0263-2373(03)00074-4
Quintanilla Pardo, I. (1991). Recursos humanos y marketing interno. Madrid: Editorial Pirámide.
Simon, H. A. (1997). Administrative behavior: A study of decision-making processes in administrative organization. New York: Free Press.
Smidts, A., Pruyn, A., & Van Riel, C. (2001). The impact of employee communication and perceived external prestige on organization identification. Academy of Management Journal, 49(5), 1051–1062. doi:10.2307/3069448
Soriano, C. (1993). Las tres dimensiones del marketing de servicios. Madrid: Ediciones Díaz Santos.
Weill, P., Subramani, M., & Broadbent, M. (2002, Fall). Building IT infrastructure for strategic agility. MIT Sloan Management Review, 57–65.
KEY TERMS AND DEFINITIONS

Effort of the Workers: The effort that workers exert, measured in terms of variables such as the time dedicated to work, the level of pro-activity in offering ideas, the level of support for new projects, the effort made to keep up to date in technical and managerial areas, the acceptance of new rules, policies and strategies, the level of acceptance of mobility between branches, cities or positions, and the days of the week on which they would be able to travel out of the city.
Internal Communication: Communication between the company and its workers as a way of implementing an internal marketing program.
Internal E-Communication in Both Routes: Communication between the company and its workers in both directions, allowing the company to communicate its mission, values, and objectives, and allowing workers to communicate their needs, desires, and expectations, as well as their evaluation of the company regarding the fulfilment of those needs. It permits the company to adapt the job it offers to the workers and their needs.
Internal E-Communication in a Global Economy World: Communication between the company and its workers through the use of new technology, achieving permanent and timely contact and communication independently of the workers' geographical location or the company's operating areas.
Labor Life Quality (LLQ): The equivalent of product/service quality in traditional marketing. It indicates the quality of the work (product) that is offered to workers and is measured by aspects such as the work environment, whether the worker likes the activity that he/she does, and the schedule, among others.
Net Price (NP): The equivalent of the traditional marketing price in internal marketing. The net price is the price paid by the workers for receiving a job (product) that offers a given LLQ. The net price is based on two elements: the effort that should be given by the workers and the wage they receive. At the same level of LLQ, workers would be disposed to increase their level of effort if the wage also increases, keeping the NP constant. The higher the LLQ, the more the company can increase the NP, in a way analogous to the quality-price relationship in traditional marketing (a simple quantified reading of this trade-off is sketched after these definitions).
Product in Internal Marketing: The job, understood in a global way, that is offered to workers. It consists of the elements that should be sold to workers: the company's image and identity, its corporate values, its plans and development projects, its organization, its way of management, the ideas, goods, and services it supplies, the possibilities for individual and professional development, the work conditions, the work atmosphere and environment, its products and services, its achievements, its success history, and its community contribution.
Promotion in Internal Marketing: Programs and incentives that the company offers to its workers to obtain a greater level of effort towards achieving a specific objective.
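The NP trade-off defined above can be given a simple quantified reading; the sketch below assumes effort and wage are expressed on a common scale, which is our assumption rather than the authors':

```python
# Hypothetical quantification of the Net Price (NP) trade-off defined above,
# assuming effort and wage can be expressed on a common scale. The reading
# NP = effort - wage is our illustration, not the authors' formal model.

def net_price(effort: float, wage: float) -> float:
    return effort - wage

# At the same LLQ, a wage increase allows an equal effort increase at constant NP:
assert net_price(effort=7, wage=5) == net_price(effort=9, wage=7)

# A higher LLQ lets the company ask a higher NP, analogous to the
# quality-price relationship in traditional marketing:
low_llq_np, high_llq_np = 2.0, 3.5
print(f"NP at low LLQ: {low_llq_np}; NP a higher-LLQ job can sustain: {high_llq_np}")
```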
This work was previously published in Encyclopedia of Human Resources Information Systems: Challenges in e-HRM, edited by Teresa Torres-Coronas and Mario Arias-Oliva, pp. 532-537, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 5.12
Early User Involvement and Participation in Employee Self-Service Application Deployment: Theory and Evidence from Four Dutch Governmental Cases

Gerwin Koopman
Syntess Software, The Netherlands

Ronald Batenburg
Utrecht University, The Netherlands

DOI: 10.4018/978-1-60566-304-3.ch004

Abstract

This chapter theoretically and empirically addresses the notion that user participation and involvement is one of the important factors for IS success. Different models and studies are reviewed to define and classify types of early end-user involvement and participation. Next, five case studies are presented of Dutch governmental organizations (Ministries) that have recently deployed an employee self-service application. Based on interviews with developers, project managers and users it can be shown that the deployment success of such systems is positively related to the extent of early user involvement and participation. In addition, it was found that expectancy management is important to keep users informed about certain deployment decisions. In this way, employees can truly use the self-service applications without much support from the HR departments.

INTRODUCTION

In 2007, the Dutch House of Representatives asked the Dutch Government questions about its ICT expenditures. Concerns were raised about how much money was wasted on governmental ICT projects that resulted in failures. The Dutch Court of Audit was instructed to come up with a report on governmental ICT projects and the possible reasons for failures. When the report (Dutch Court of Audit, 2007) was finished, it named several difficulties that can be faced when executing ICT
projects for governmental organisations. Among this list is the impact of the changes caused by the implementation of the IT system. Users' current way of working may be completely changed by changing work processes when the system is introduced. Users therefore need to be informed and trained to benefit completely from the system. Another cause of problems is the need for clear goals and demands. If the software developer does not receive clear demands and wishes, the actual end-product might not be what the government thought it would receive. In both of the mentioned problems users play an important role in making the system a success. There is already a lot of agreement on the fact that users should be involved to produce usable software programs. It is recommended in ISO standard 13407 to involve users to get better insights into the requirements for a software application. Most attention to user involvement is still on the usability testing of systems, which happens at a later stage in the development process. However, the sooner the end-user is involved, the more efficient it is (Noyes et al., 1996; Chatzoglou & Macaulay, 1996; Blackburn et al., 2000). One of the challenges in involving users in IT developments is the time factor, which plays a very important role in governmental IT projects. Most of the decisions to implement or develop new Information Technology have a political background. This means the project will have to be delivered by the end of the current cabinet's term. This introduces a certain pressure for the project to be delivered as soon as possible. This conflicts with the idea that user involvement will take a serious amount of extra time in the development of a new system (Grudin, 1991). The systems that have the specific attention of this research at first sight also seem to conflict with this additional time needed in IT projects when involving users. The main reasons for implementing e-HRM systems and Shared Service Centres (SSC) are increasing efficiency and productivity (Verheijen, 2007; Janssen & Joha, 2006). For the current Dutch cabinet this is very important, because it wants to decrease the number of civil servants by 12,800 to achieve a cost cutback of 630 million Euros in four years (Ministerie van Binnenlandse Zaken en Koninkrijksrelaties, 2007). Another difficulty in the involvement of users is the selection of the right groups of employees to participate in the project (Grudin, 1991). This is especially true for most governmental organisations, because they employ a large number of civil servants. As the applications that are the subject of this research are mainly aimed at self-service, all of these civil servants are potential end-users. This is a very diverse group, and it can be considered a challenge to make the right selection of users from this total population. In this chapter we address the question of which methods are currently used within Dutch governmental institutions to involve end-users when deploying employee self-service applications. We also investigate the relationship between end-user participation and involvement and the success of such e-HRM applications. Four Dutch governmental organizations (Ministries) that have implemented employee self-service applications (i.e. e-HRM) are investigated, which offers the possibility to compare the different development approaches that were followed. Semi-structured and topic interviews were held with stakeholders within the Ministries to explore which methods are already used to involve users in the process of deployment. Their experiences are described and reflected upon in the closing section of the chapter.
THEORY

A Review on the Role of User Participation

DeLone & McLean (2003) evaluated empirical testing and validation of their original D&M IS Success Model (DeLone & McLean, 1992) by other researchers and developed an updated IS Success Model. An adaptation of this model is depicted in Figure 1.

Figure 1. Adapted from the D&M (updated) IS success model (DeLone & McLean, 2003)

The model is based on the three different levels of communication that were already defined by Shannon & Weaver (1949). The technical level is concerned with the physical information output of a system, and is evaluated on accuracy and efficiency. The semantic level deals with the meaning of the output of a system, and specifically with how well the output conveys the intended meaning. The effectiveness level or influence level (Mason, 1978) concerns the effect of the information on the receiver. The top layer of the model contains three different quality dimensions. The two dimensions at the left-hand side of this layer are similar to those in the original D&M model (DeLone & McLean, 1992). Terms to measure information quality were accuracy, timeliness, completeness, relevance and consistency. This concept and these terms thus measure success on the semantic level. For system quality these terms were ease-of-use, functionality, reliability, flexibility, data quality, portability, integration and performance. These measures are related to the technical success level. The concept of service quality was added because of the changing role of IS organisations, which more and more deliver services instead of only products. The use concept in the model measures success on the effectiveness level. It is considered an important indicator for system success, especially when it is "informed and effective" (DeLone & McLean, 2003). There are, however, some difficulties in the interpretation of the concept of use. It has a lot of different aspects, for instance voluntariness and effectiveness. It therefore received criticism in research on the original model. One example is that use is not a success variable in the case of mandatory systems. DeLone & McLean (2003) argue, however, that no use is "totally mandatory" and that mandatory systems will be discontinued if they do not deliver the expected results. The differences are thus on different levels: mandatory to use for employees, but voluntary to terminate for management. Nevertheless, to overcome some of the difficulties, in the updated model a distinction is made between intention to use (attitude) and use (behaviour). User satisfaction is considered a useful measure in evaluating the success of information systems (Ives, Olson, & Baroudi, 1983). In most cases this subjective judgment by users is considered to be more practical, because of the difficulties in measuring the success of IS objectively (Saarinen, 1996; Lin & Shao, 2000). The net benefits concept is regarded as a set of IS impact measures, for instance work group impacts and organisational impacts. More specific measures are, for instance, quality of work, job performance and quality of the work environment. To reduce the complexity of the model these are all grouped together; which impact will be chosen depends on the information system that is evaluated. These two concepts (user satisfaction and net benefits) measure success on the effectiveness level. Lin & Shao (2000) investigated the relationship between user participation and system success. They found a significant relationship between both concepts, but warn that the context should be taken into account. Both user participation and system success can be directly and indirectly influenced by other factors. Based on the outcomes of their data analysis they also suggest that "getting users involved in the development process may improve their attitudes toward the system and enhance the
importance and relevance users perceive about the system”. Other findings were the positive influence of user attitudes on user involvement and the fact that users are asked more to participate when the complexity of systems increases. In a survey of 200 production managers Baroudi et al, (1986) found positive correlations between user involvement and user information satisfaction and system usage. User involvement in this case was conceptualised as activities during the development that enabled the users to influence the development. Although, this is more towards user participation, it does not completely distinct the behaviour and psychological concepts. Interviewing users and developers from 151 projects, McKeen & Guimaraes (1997) found a positive and significant relationship between user participation and user satisfaction. They also noted that projects concerning systems or tasks with a high complexity called for more user participation. To positively influence the success of new software applications, developers often turn to user involvement. According to Kujala (2003) user involvement can be seen as a general term and refers to Damodaran (1996) who suggests a continuum from informative to consultative to participative. However, some researchers suggest a difference between user involvement and user participation. User participation will then be the “assignments, activities, and behaviors that users or their representatives perform during the systems development process” (Barki & Hartwick, 1989). User involvement can be regarded as “a subjective psychological state reflecting the importance and personal relevance that a user attaches to a given system” (Barki & Hartwick, 1989). McKeen et al, (1994) in their research on contingency factors also describe the development of the division of these two concepts. The refined model is even further extended, in that user the relationship between user participation and system success is influenced by other moderating variables like the degree of influence, attitude, communication and type of involvement. In this study therefore the distinction
1352
between the two concepts user participation and user involvement will also be used. Several researchers do not only recognize the link between user participation and system success, but even stress the importance of involving end-users in the development of software. Kensing & Blomberg (1998) state the participation of end-users is “seen as one of the preconditions for good design” (see also Shapiro, 2005). The workers have the information about the working environment and organisation, which designers logically do not always possess. Combining the domain knowledge of the workers and the technical knowledge of the designers is considered a foundation for the development of a useful application (Kensing & Blomberg, 1998).Reviewing several studies on user involvement Damodaran (1996) identified a number of benefits of participating end-users: •)>>
•)>>
More accurate user requirements: Numerous problems or defects in software applications can be traced back to poorly capturing requirements at the beginning of the development process (Borland, 2006). Pekkola et al. (2006) also argue one of the reasons for information system development projects to fail are incomplete requirements. In their studies they found user participation useful in gathering “credible, trustworthy and realistic descriptions of requirements”. In turn these accurate user requirements result in an improved system quality (Kujala, 2003). Avoidance of unnecessary or unusable costly system features: Two of the usability guidelines given by Nielsen (1993) are “Designers are not users” and “Less is more”. Designers might think of certain features to incorporate in the application without consulting end-users. Functionalities that are completely logical for developers, might be completely incomprehensible for users. This might
Early User Involvement and Participation in Employee ↜Self-Service Application Deployment
•)>>
•)>>
•)>>
result in users having to spent too much time on learning how to use these functionalities or even not using them. Designers might also have the tendency of incorporating too many options to satisfy ‘every’ end-user. Besides the fact that users might never know of these options and use them, they can also work contra productive by overwhelming to users. A lot of time and effort for developing these features can be saved by participating end-users. Improved levels of system acceptance: The levels of system acceptance can be in positively influenced by user involvement in several ways. Among the list Ives & Olsen (1984) have found in other literature are for instance the development of development of realistic expectations about the information system (Gibson, 1977). Also decreasing user resistance against the new system and actually creating commitment for it are other results of user participation (Lucas, 1974). Cherry & Macredie (1999) state participatory design as a means to overcome the acceptability problems systems might encounter without the participation of users. Better understanding of the system by the user: Logically during the participation users will learn about the system by experiencing the development (Lucas, 1974). This familiarisation also leads to the increase in chances users will come up with suggestions during the development, because they will feel more confident (Robey & Farrow, 1982). In the end this greater understanding should lead to a more effective use of the application. Increased participation in decisionmaking within the organisation:Clement & Van den Besselaar (1993) point to the fact that participation is not only restricted to the design of an IT-system. The application will probably change the way tasks are
executed, thus affecting the entire organisation and by participating employees have the possibility to influence this (Robey & Farrow, 1982). It might thus not be restricted to the design of the application, but also to other decision-making processes within the organisation. Although involving users is considered to be useful, it also introduces a number of difficulties. Firstly, a large amount of a user’s knowledge about the process or task the software application will have to support has become tacit (Wood, 1997). It might therefore be hard to get information from these users about the way they work. An example of this was also visible at the Ministry of the Interior. Developers built a certain functionality based on the description of how a task was executed by employees without an application. After implementation however, it became clear in the former way of working an extra file was created by users to keep track of the status of the tasks at hand. Since this was not formally part of the process, they forgot to mention this to the developers. It was thus not incorporated in the new application, while this would have been relatively easy to realise. To overcome this kind of problems it is possible to perform field studies, which have the advantage that users do not have to articulate their needs (Kujala, 2003). Other researchers also suggest the use of (paper) prototypes to counter the difficulties users might have in articulating their needs (Pekkol aet al, 2006; Nielsen, 1993). Users can also be reluctant to have developers observing them while they work (Butler, 1996). They might express concerns about justifying the time they would have to spent with the design team or disturbing their co-workers. Solutions to this problem are getting commitment from management (Grudin, 1991) and having sessions in separate rooms so no colleagues would have to feel bothered. Besides these problems Butler (1996) mentions the fact that these sessions are
considered to consume a lot of time, both in planning and in executing them. Several researchers also point out that involving users usually delivers a large amount of raw data that is difficult to analyse and to use in decision making (Brown, 1996; Rowley, 1996). This will make projects where users participate more time-consuming and thus something development teams want to cut back on. Grudin (1991) also noted the judgement of developers that user involvement would take too much time. However, as already stated, allocating more time upfront will result in a faster cycle time for software developers (Blackburn et al., 2000). Some members of the design team might simply not have the abilities needed to communicate efficiently with users (Grudin, 1991). They might find it difficult to understand the work situations of users, or miss the empathy needed when communicating with users that do not possess the computer skills they have themselves. As a solution to the problematic communication between users and developers, mediators could be brought into action (Pekkola, Niina, & Pasi, 2006). They can act as a bridge between both groups, translating the different concepts from one group to the other. Mock-ups and prototypes from the design team are for instance discussed with users, while user input and feedback is given to the design team. Developers can then focus on the design and implementation of the application instead of having to spend time and effort on user participation methods. A challenge that occurs even before all of those mentioned is the selection of user representatives and obtaining access to them (Grudin, 1991). Even when an application is developed specifically for one organisation, developers might fear the risk of missing a certain user (group) in their selection. A possible solution is to define a few personas based on intended users. A persona is defined as "an archetype of a user that is given a name and a face, and it is carefully described in terms of needs, goals and tasks" (Blomquist & Arvola,
2002). This can be useful in organisations that have large groups of users, which makes it tricky to randomly take a small selection out of the total group. Subsequently, getting hold of the 'selected' end-users might also pose some difficulties. There might be several barriers, like information managers acting as user representatives while not resembling the actual end-user. The physical distance between developers and users might also create problems. One of the solutions is, if possible, to have the development team work on location at the customer. This way easy access to users is possible (planned or ad hoc).
Early Involvement and Participation of Users

The reasons for having user participation are clearly visible, but when should end-users be engaged in the development process? Several researchers suggest that users should be involved early in the process. For instance, if users are used as sources in the requirements capturing process, the number of iterations is smaller than if they are not (Chatzoglou & Macaulay, 1996). Capturing usability problems early in the process is also very rewarding. Mantei & Teorey (1988) estimate that correcting problems early in the development process costs three times less than correcting them later on. Nielsen (1993) also supports the involvement of users just after the start of the design phase. Regular meetings between users and designers could for instance prevent a mismatch between the users' actual task and the developers' model of the task. In comparing software development firms, Blackburn et al. (2000) found that the ones that were considered to have a faster cycle time were the ones that spent more time on, for instance, getting customer requirements at the early stages of the project. In the follow-up interviews to their quantitative data analysis, managers mentioned that much time in projects is consumed by rework. To reduce this time it is important to capture the needs of the users early in the development, so
before the actual programming has started. In the end this will actually improve the speed and productivity of the software developer. Damodaran (1996) underlines the justification of early user involvement by pointing to one of the principles of a number of social design approaches: organisations will just postpone the detection of problems if there is no effective user involvement. Again, problems that have to be solved later on in the development, or even after implementation, will result in higher costs. User participation can take on a number of forms in the development of a software product. Kujala (2003) suggests four main approaches are detectable: user-centred design, participatory design, ethnography and contextual design. Since involving end-users from the beginning of the project is considered very beneficial, the focus will be on those approaches and methods that take place early in the development process. Gould & Lewis (1985), in their research on user-centred design, recommend an early focus on users and direct contact between the development team and end-users. This implies doing interviews and discussions with end-users, even before any design has been made. People should also be observed when performing tasks, both in the present situation and with prototypes that are developed during the project. The design should also be iterative; this could for instance be realised by using prototypes that can be reviewed by users. Participatory design is considered to be a design philosophy instead of a methodology (Cherry & Macredie, 1999). It is not prescriptive and therefore the set of techniques that could be used should be considered open-ended. The approach does have some identifiable principles, however. Firstly, it aims at the production of information systems that improve the work environment. Secondly, users should be actively involved at each stage of the development, and finally the development should be under constant review (iterative design). Cherry & Macredie (1999) also mention four important techniques, cooperative
prototyping being the main technique. The other techniques are brainstorming, workshops and organisational gaming. Ethnography consists of observing and describing the activities of a group, in an attempt to understand these activities (Littlejohn, 2002). In the design of information systems it is defined as developing "a thorough understanding of current work practices as a basis for the design of computer support" (Simonsen & Kensing, 1997). The reason for this is the occurrence of differences between what users say they do and what they actually do (Nielsen, 1993). The approach is descriptive in nature, takes a member's point of view, takes place in natural settings, and behaviours should be explained from their context (Blomberg et al., 1993). A typical method of ethnography is observing end-users while they perform their daily work. This can be done by following them in their work, with designers being present at the office, or by recording the tasks on video and analysing this footage later on. Similar to ethnography is contextual design. Its goal is to help a cross-functional team agree on what users need and design a system for them (Beyer & Holtzblatt, 1999). The approach focuses on the improvement of the current way of working within an organisation. It is thus not only limited to the design of a system, but also incorporates redesigning the work processes. Users are the main source of data to support decisions on what developments should take place. Specific methods to obtain information from users are (paper) prototyping and contextual inquiry. The latter method is a combination of observing users and interviewing them at the same time (Beyer & Holtzblatt, 1999). Co-development, ethnographic methods and contextual inquiry are participatory methods that are located early in the development cycle (Muller, 2001). Most of the approaches actually span the entire development. Table 1 summarises this section and lists the techniques that could be used in the early stages of the development.
Table 1. Potential early participation methods

Method | Approach (Kujala, 2003)
Observation | User-centred Design / Ethnography
Interviews | User-centred Design
Discussion | User-centred Design
Prototyping | User-centred Design / Participatory Design / Contextual Design
Brainstorming | Participatory Design
Workshops | Participatory Design
Organisational gaming | Participatory Design
Video analysis | Ethnography
Contextual Inquiry | Contextual Design
THE CASE: EMPLOYEE SELF-SERVICE APPLICATIONS

As stated in the previous section, the type of system and the contextual environment are important factors to keep in mind when measuring IS success. In this paper we focus on Employee Self-Service (ESS) systems, which represent one of the fast-developing trends in the domain of e-HRM (Strohmeier, 2007; Ruël et al., 2004). This type of system is specifically relevant for this study as it directly relates to the issue of user participation, since it aims to empower employees within organizations. ESS is defined by Konradt et al. (2006) as a "corporate web portal that enables managers and employees to view, create and maintain relevant personnel information". Konradt et al. also identify four basic channel functions an ESS can support:
• informing employees about rules and regulations
• providing interactive access to personal information
• supporting transactions, like applications for leave
• delivering for instance payslips or training videos
All of the above tasks are normally done by the organisations' HR departments. Fister Gale (2003), in her study of three successful ESS implementations, describes reducing the workload of these personnel departments as a major reason for implementation. For instance, changing the personal information of employees, often stored in several databases, normally had to be done by HR employees. This can now be done by employees themselves by filling in web-based forms, resulting in (real-time) updates of the databases of the HR systems. The web-based nature of ESS also offers the possibility to significantly decrease the paperwork that needs to be handled. However, the benefits are not only on the organisations' side; employees also profit from the implementation of ESS. They have instant access to information, and the effort needed for certain transactions, like expense claims, is reduced. Managers also benefit from the up-to-date information and easy access to, for instance, reports, resulting in a better overview of their resources.
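The self-service pattern described here, where an employee edits a web form and the underlying HR record is updated in (near) real time, can be illustrated with a minimal sketch. The table layout, function name and field names below are our own assumptions for illustration, not part of any of the systems discussed in this chapter:

    import sqlite3

    def update_personal_details(conn, employee_id, new_address):
        # In an ESS, this update replaces an HR clerk re-keying the
        # change into one or more back-office systems.
        conn.execute("UPDATE employees SET address = ? WHERE id = ?",
                     (new_address, employee_id))
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, address TEXT)")
    conn.execute("INSERT INTO employees VALUES (1, 'old address')")
    update_personal_details(conn, 1, "Nieuwe Laan 12, Den Haag")
    print(conn.execute("SELECT address FROM employees WHERE id = 1").fetchone()[0])

In a real deployment the form handler would of course also enforce authentication and validation; the point here is only the direct, employee-initiated database update.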
ESS and User Satisfaction

Konradt et al. (2006) used the well-known Technology Acceptance Model (TAM; Davis, 1989) to describe the influences of a system's usefulness and ease of use on user satisfaction and system use. The research model they used in their investigation is depicted in Figure 2. Ease of use related positively to user satisfaction, as well as to usefulness. Usefulness in turn positively influenced both system use and user satisfaction. A final relationship was described between user satisfaction and system use. A number of implications were drawn from these findings to ensure the success of an ESS implementation. The suggestion that system acceptance is mainly determined by the usefulness of the system and its ease of use implies that enough attention should be paid to these factors. Informing and involving employees during the development is advised to influence the ease of
Figure 2. The research model of Konradt et al. (2006)
use and usefulness of the application. It should be clear to employees why it is beneficial to them to use the ESS, to ensure system acceptance. If users do not accept the system, the workload reduction for the HR department will not be realised. Instead of the normal workload, HR employees will be flooded with help requests by users who do not understand the system or are even reluctant to work with it.
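Stated compactly, the paths just described can be written as a set of structural equations (a sketch in our own notation; the coefficient symbols are not taken from Konradt et al.), with EOU for perceived ease of use, U for perceived usefulness, S for user satisfaction and B for system use:

    \begin{aligned}
    U &= \beta_{1}\, EOU + \varepsilon_{1} \\
    S &= \beta_{2}\, EOU + \beta_{3}\, U + \varepsilon_{2} \\
    B &= \beta_{4}\, U + \beta_{5}\, S + \varepsilon_{3}
    \end{aligned}

Positive estimates for \beta_{1} through \beta_{5} correspond to the positive relationships reported above.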
Data and Methods

To determine in what ways users are involved or enabled to participate in the development of software applications, interviews were held at four Dutch ministries. The cases are described below.
Emplaza at the Ministry of the Interior and Kingdom Relationships

The person interviewed representing the Ministry of the Interior and Kingdom Relationships is the project leader Self-Service / Emplaza. The application is called Emplaza, a combination of the words Employability and Plaza. This self-service human resources application is used by approximately 5,500 civil servants within the Ministry of the Interior and Kingdom
Relationships. The application is also used by the Ministry of Agriculture, Nature and Food Quality and the Ministry of Economic Affairs, resulting in a total number of about 17,000 users. The software supports up to twenty HR-processes, for instance applying for leave or filing an appraisal conversation. The application is actually a sort of web application and functions as a layer over the actual administrative IT-system. It is built and managed by an external party. At the time of the interview a new release of the application (version 4.3) was under development. This will be the basis for this case description. Since the application is not entirely new, some of the reactions of the users can be expected based on experiences with the previous releases. These experiences also influenced the way in which new releases or features are developed. This time, however, the release has taken more than a year to develop because of some important differences with previous situations. First of all, the builders were new to the project and therefore the advantage of having worked together (as with previous releases) was lost. Second, release 4.3 can be considered larger and more extended in terms of number of functionalities. As a result, a considerable amount of extra time was needed to test this version. Finally, the change in organizational structure with the introduction of
P-Direkt (see next section) also took some time to get used to. P-Direkt, for instance, now takes care of communication between the external builder and the user group of the Ministry. For the development of the new releases, key-users or super-users were selected to participate. These civil servants have a lot of knowledge about the process the application is supposed to support. By interviewing them, the current way the process is executed was determined. A next step was to establish which forms should be available to support tasks within the process. After that the next task was to find out how the forms and workflow should look in Emplaza. When agreement was reached on these issues, the Functional Designs were created by the software developer. Before the actual programming started, a number of applications that supported similar HR-processes were investigated. Findings from this analysis formed the starting point of how this should be realized in the Emplaza application. The key-users are thus very involved in the business rules that need to be implemented in the system. Other aspects they are asked to judge are the look-and-feel of the user interface and the performance of the application. To do this they have to use test-scripts that force them through every step and part of the new functionality, so they will be able to comment on all the new developments. Members of the HR self-service project team also test the application by looking at it from the viewpoint of a 'new' user. They specifically pay attention to the help texts that are created for the end-users to guide them through certain tasks. A number of criteria were used in selecting employees to participate in the development of the new release. Participants had to have a lot of knowledge and experience concerning the process at hand. Furthermore, they had to be available to cooperate, i.e. they had to be freed from their normal tasks. Finally, they also had to be able to think constructively about the new functionality. Most of the time it had become clear in earlier sessions whether or
not people met this latter criterion. For testing the application, managers are asked to cooperate. They are selected on their position within the organization and thus all have a different role in the HR-process that is going to be implemented in the system. The aim is to have two 'camps': those who are sceptical of ICT and those who feel positive about ICT. End-users are actually involved only when the new release has gone 'live', i.e. has gone into production. Complaints and issues that come up during the use of the application are gathered and reviewed. These form the foundation for the change proposals that are discussed at an interdepartmental level. During these discussions decisions are taken on which changes really need to be implemented. If end-users are involved earlier after all, most of them come from the central apparatus of the Ministry. The reason for this is that they are located close to the test location. They are chosen as randomly as possible, so no real criteria are used to select participants. This way the development group hopes to get 'fresh' insights about the application.
P-Direkt to be Used Throughout Several Dutch Ministries

In July 2003 the Dutch cabinet chose to start the establishment of a Shared Service Centre (SSC) called P-Direkt. This should be a Human Resource Management SSC for personnel registration and salary administration. Although the project had some major problems, it is still in progress with the same main goal: it should lead to a more efficient HR column of the government (Ministry of Internal Affairs, 2006). Two identified conditions to reach this goal are combining administrative HR-tasks and the implementation of digital self-service. The latter of these recognised conditions makes P-Direkt an interesting subject for examining how user involvement or participation is applied in this project.
The respondent for this interview is the test manager at P-Direkt, who is responsible for the Functional Acceptance Tests and User Acceptance Tests of the HR-portal that is currently being developed. This self-service HR-portal should eventually be used throughout the entire government. In contrast to the Emplaza 4.3 release, this is an entirely new application. It is built using mostly standard functionalities of SAP but, if necessary, customization is also applied. In the process of developing the self-service application, users are involved in different ways and at different stages. Right from the start several workgroups were formed. These consist of civil servants from the several ministries that will in the end use the application. The members of these groups could be considered end-users, as they will eventually use the application in their normal work. At the same time, they have a lot of knowledge about the HR-processes the application should support. These workgroups are dedicated full-time to the development of the application for a longer period of time. One workgroup, for instance, has been involved from the start in simplifying and standardizing the HR-processes within the Dutch government. After twenty-four processes had been defined, they formed the basis for building the technical system that should support them. The workgroups were then involved in incorporating the right business rules within this system. An example of such a rule is calculating the maximum compensation that should be granted in different situations. In the final part of building the application, a number of end-users are asked to test the application. This group of end-users does have knowledge about the processes that should be supported; however, they were not involved earlier in the development of the application. The involved departments are asked to send one or two employees to take part in the User Acceptance Tests. Per session, seven to ten participants are asked to complete the scenarios that are designed to guide them through a certain task. These tasks,
for instance filing an expense claim, are subdivided into different steps. This way users can not only comment on the application in general, but also note findings about specific steps in the process. The scenarios thus also make it possible to easily group comments about certain steps in the process, or specific parts, from all the different test users. These grouped and summarized comments and findings are then discussed by P-Direkt and the builder of the application. During these discussions it is decided which findings need fixes, and those are then built within two to three days. After that a new test session is held to examine whether or not the problems were sufficiently solved.
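The per-step grouping of test findings described for P-Direkt can be sketched as follows. The scenario steps, field names and findings are hypothetical and only illustrate the bookkeeping, not the actual P-Direkt tooling:

    from collections import defaultdict

    # Each finding records the scenario step it belongs to.
    findings = [
        {"step": "1. open the expense claim form", "tester": "A",
         "comment": "label of the cost type field unclear"},
        {"step": "2. enter the amounts", "tester": "B",
         "comment": "currency field rejects commas"},
        {"step": "1. open the expense claim form", "tester": "C",
         "comment": "submit button hard to find"},
    ]

    # Group the comments per scenario step so they can be discussed
    # with the builder step by step.
    grouped = defaultdict(list)
    for finding in findings:
        grouped[finding["step"]].append(finding["comment"])

    for step, comments in sorted(grouped.items()):
        print(step, "->", comments)

Grouping per step is what enables the two-to-three-day fix cycle described above: each discussion with the builder can walk through the scenario in order.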
P-Loket at the Ministry of Health, Welfare and Sport

The Ministry of Health, Welfare and Sport uses an application which is very similar to Emplaza. It is called P-Loket and was also developed for the Ministry of Social Affairs & Employment and the General Intelligence & Security Service, because they use the same payroll application. P-Loket is a web application and functions as a layer on top of this payroll application (PersonnelView, or P-View). This situation is thus very comparable to the one at the Ministry of the Interior. P-Loket is a totally newly developed application, which can be used by employees to support them in (personnel) tasks like the filing of a request for leave. In June 2007 around 12 different forms were supported by the application, which could be used by approximately 2,250 civil servants. These numbers should grow to about 18 forms and 5,000 employees by January 2008. The forms and processes that should be supported were chosen based on the outcomes of the standardisation workgroup of the P-Direkt project. The P-Direkt project is also the reason that after the 18 forms are finished no further developments will be done: the P-Direkt application will eventually substitute P-Loket.
From the start of the development of the application (in 2006) it was already clear P-Direkt would become the governmental HR self-service application. However, for reasons of more rapidly realisable efficiency benefits, and to get used to self-service applications, it was decided to still start the development of P-Loket. The first quarter of 2006 was used as preparation and to come up with a plan of how to approach the project. The second quarter of the year was used to prepare for the building, make a process design and set up authorisations. By the end of June the actual creation of the application could start. Building the application was done by an external software company that also created the P-View application. P-Loket was also a totally newly developed application for them. However, P-View and P-Loket are quite similar web applications, which had some advantages. The links, for instance, that had to be available were already more or less present and thus did not have to be created completely from scratch. They had one developer working full-time on the project. Employees of the Ministry were involved in several ways during the development. The project group that was formed at the start consisted of HR employees, members of the audit service and two employees of the Information & Communication department. The latter two were experts on web (applications) and usability. Both these experts had the task to look at the application from a user perspective. The usability expert, for instance, discussed a number of prototypes (on screen) with the builder. By asking questions like "what will happen if a user clicks this button?" issues could already be addressed before anything was programmed. Apart from the experts, the project group members did not attend to the user interface of the application. Their focal point was the business rules that should be implemented. Next to the fact that users were represented in the project group, other civil servants were asked to cooperate in a usability test. This test was carried out by a third party, and the main reason was
to resolve usability issues the software builder and the usability expert from the project could not agree on. The test was carried out with one test person and a guide in one room, while observers were in another room to take notes and film the session with a camera. In selecting employees to take part in this test, the project group tried to have a balance in computer skills, male/female ratio and office/field staff ratio. To find eight participants, contact persons were asked if they knew employees that fitted the necessary characteristics. Another way to involve end-users was to have sessions with managers to discuss the functionality that supports performance review conversations with them. Per session, the application was demonstrated to three to twelve managers. It took roughly five weeks to complete the sessions with two hundred and fifty managers. Managers could immediately deliver feedback in the form of questions or remarks during the demonstration. Although this way of involving end-users took considerable time and effort, it was considered to be very useful and to contribute to the acceptance of the application. One of the strengths of having different sessions was that certain issues came up on numerous occasions. This made it easier to establish the importance of a problem or request. The issues from the different sessions were combined and for each issue the urgency was determined. Subsequently the impact of solutions for these issues was discussed with the software developer. The fact that the project group was located in the same offices as end-users that were not in the project group also offered the possibility to ask these colleagues for their opinions in an informal, ad hoc way. The project group gratefully made use of this opportunity during the development of P-Loket.
PeopleSoft/HR at the Ministry of Defence

The Ministry of Defence started implementing self-service for HR-processes in 2004, but without involving end-users. As a result the users started
having wrong interpretations of the application. Therefore the Ministry started improving the self-service parts of the application in 2006. The application is based on the PeopleSoft HR-system, and the first processes to be supported were looking into personal data, filing requests for leave and filing requests for foreign official tours. Approximately 80,000 users make use of the software, of which about 65,000 are permanent staff of the Ministry. Besides this large group of users, another point of consideration is the sometimes disrupted relationship with the formal superior. This is due to frequent shifts within the organisation, for instance staff being posted abroad for military operations. As a result the application should offer the possibility to delegate certain tasks to other superiors, planners and/or secretaries. It took about one and a half years from the beginning of the project until the improved application went live. As a start it was determined which people and processes should be supported. Subsequently the possibilities of the (then) current application were investigated. There were three important points of departure:
• Outcomes of use should be visible to the user
• Employees use the application in good faith ("the user does nothing wrong")
• No training should be necessary to use the application.
The main idea behind this is that the development should not only be seen as supporting a process by an application, but also as supporting users in their actions when using the application. Usability research was done by someone from outside the Ministry who had no knowledge of HR-processes or PeopleSoft. This person asked the civil servants how the current application was used. Some consultancy was done by external parties, but it was felt that most of the work to come up with the advice and reports was done by the internal organisation. Besides the usability research, users
were also involved in other ways. Employees with reasonable IT knowledge and skills were asked to name functional gaps in the support of certain processes. In addition, case studies were done by randomly asking people in the organisation to perform tasks with the application. They only got a short introduction to the task and the reassurance that they could do nothing wrong, so nothing about how the application worked was explained. After this, users were observed completing the tasks, while they were invited to think aloud. The moments when users hesitated or were in doubt were interpreted as moments in the process where the application should offer help. The outcomes of this test were thus:
• Information on how the application was used
• Whether or not concepts and descriptions were interpreted as intended
• Functional problems
• Insights in perceptions of what has happened by performing a task ("what will be the next steps in the organisation?")
These findings were incorporated in the improved version of the software. It would be valuable to perform such a test again now that the build is complete; however, at the moment there is no time available to do this. Demands for support and help options are not the same for every user, for instance because of the mentioned differences in IT-skills. One of the tools for help within the PeopleSoft application is the "See, Try, Know, Do" principle. Users can first look at a demonstration (see) before trying it themselves in a simulation mode (try). A next step is then to take a test to check if they understand everything (know), before finally actually performing the task with the application (do). Users can use one or more of these functions to support them in the use of the software.
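The "See, Try, Know, Do" progression can be sketched as a simple ordered mode list; the class and helper below are our own illustration, not part of the PeopleSoft tooling:

    from enum import Enum
    from typing import Optional

    class HelpMode(Enum):
        SEE = "watch a demonstration"
        TRY = "practise in a simulation mode"
        KNOW = "take a test of understanding"
        DO = "perform the task in the live application"

    def next_mode(current: HelpMode) -> Optional[HelpMode]:
        # Users may skip steps; this helper only yields the canonical next one.
        order = list(HelpMode)
        position = order.index(current)
        return order[position + 1] if position + 1 < len(order) else None

    print(next_mode(HelpMode.SEE))  # HelpMode.TRY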
In choosing people for the tests, information managers were asked if they could point out employees that met certain criteria. One criterion, for instance, was whether or not they were very skilled in using IT. Although there were some criteria, no standard profiles were used to categorise users into groups. To confront these users with the improved application during the development, prototypes were used.
Cross-Case Comparison Analysis

The four case studies presented in the preceding sections have several things in common. First of all, they are of course all developments for a governmental organisation. Secondly, they clearly all support self-service for Human Resource processes. Thirdly, they all serve a large number of users; the HR self-service applications range from 5,000 to 80,000 users. The fourth parallel is that in all cases external organisations were hired to assist in the development, although in the case of the Ministry of Defence this was mostly limited to advisory reports. The previous sections also show that users are involved or are able to participate in the development of IT-systems in the Dutch government. Table 2 depicts the different ways users participated in the different cases that were discussed. It is clearly visible that users get most attention during the test phase of the development.

Table 2. Used methods to involve end-users (columns: Prototypes, Testing, User research; rows: Emplaza at the Ministry of the Interior and Kingdom Relationships; P-Direkt, to be used throughout several Dutch Ministries; P-Loket at the Ministry of Health, Welfare and Sport; PeopleSoft/HR at the Ministry of Defence)
In all of the discussed applications end-users have participated in one or more tests during the development. Logically, this testing took place in later phases; however, in three cases prototypes were used to be able to show users (parts of) the application earlier in the development process. Non-expert end-users that participated from the start of the projects were visible in two cases (P-Loket at the Ministry of Health, Welfare and Sport and PeopleSoft/HR at the Ministry of Defence). Most of the users that were engaged from the start had expert knowledge of the processes that were computerised by the implementation of the application. Of particular interest is how the participation of end-users might have influenced the success of the applications in question. Figure 3 depicts the different applications from the interviews in a diagram that scores them on success (y-axis) and user participation (x-axis). Their position on both axes is based on the interviews, but is of course subjectively determined. It is not meant to imply that any of the applications is 'better' than the others. The concept of 'application success' is seen from the viewpoint of each organisation and is based on:
• Time and effort needed for development (relative to the amount of features)
• Number of problems encountered during tests and implementation
• Satisfaction with the end-product
• Contribution to increase in efficiency of the supported tasks
As a final step to complete the four case studies among the Dutch Ministries, additional information about the (perceived) success of the ESS applications was collected. This was done through one short personal e-mail sent to the interviewees, containing six system quality criteria for which an answer on a 5-point scale was requested (a simple way of combining such ratings is sketched after the list):
Figure 3. User participation and application success
• The number of problems reported by users in the user (acceptance) tests (ranging from 1 being "very few" to 5 being "seriously many")
• The amount of rework needed after testing (ranging from 1 being "very little" to 5 being "very much")
• Current satisfaction with the application by end-users (ranging from 1 being "very dissatisfied" to 5 being "very satisfied")
• The amount of questions that reached the helpdesk shortly after implementation (ranging from 1 being "very few" to 5 being "seriously many")
• Contribution of the application to the increase in efficiency (ranging from 1 being "very little" to 5 being "very much")
• The overall success of the application (ranging from 1 being "very low" to 5 being "very high")
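One simple way to combine such ratings into a single success score is to reverse-code the items where a high score is unfavourable (questions 1, 2 and 4) and average the result. This aggregation is our own illustrative assumption; the chapter itself only reports the raw scores (see Table 3):

    def composite_success(scores):
        # Items where 5 means "bad" are reverse-coded onto the same 1-5 scale.
        reverse_coded = {"problems", "rework", "helpdesk"}
        valid = {k: v for k, v in scores.items() if v is not None}
        adjusted = [6 - v if k in reverse_coded else v for k, v in valid.items()]
        return sum(adjusted) / len(adjusted)

    # Raw scores for P-Loket, taken from Table 3.
    p_loket = {"problems": 2, "rework": 4, "satisfaction": 4,
               "helpdesk": 4, "efficiency": 4, "overall": 4}
    print(round(composite_success(p_loket), 2))  # 3.33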
Table 3 shows the answers of the respondents that were queried for the four different Ministries/cases. Based on these additional data, the actual relation between early user involvement, satisfaction and application success can be estimated for the four cases under investigation. Before we do so, some remarks should be made beforehand. First, it should be noted that for Emplaza both the latest version (4.3) and earlier versions (E1) were investigated. During the interviews it became clear that in earlier versions a lot more user participation was applied.
Table 3. Scores by respondents to extra questions

Criterion | Emplaza (Ministry of the Interior and Kingdom Relationships) | PeopleSoft/HR (Ministry of Defence) | P-Loket (Ministry of Health, Welfare and Sport) | P-Direkt (several Dutch Ministries)
1 Reported problems after test | 5 | 3 | 2 | 2
2 Amount of rework needed | 5 | 4 | 4 | 1
3 Current satisfaction level | 4 | 4 | 4 | n/a
4 Questions at helpdesk | 2 | 3 | 4 | n/a
5 Contribution to efficiency | 4 | 2 | 4 | n/a
6 Overall success | 3 | 2 | 4 | n/a
The short communication lines resulting from the programming team working at the same location as users made sure users could be consulted frequently. At the deployment of Emplaza 4.3 end-users were involved, but not at all before a working product had been developed. A number of problems arose in the development and testing of the 4.3 version. Some functionalities could for instance not be implemented on time, because tests by key-users revealed too many hiccups. User satisfaction with the functionalities that could be implemented on time, however, was considered to be reasonably high. Also, it would not be fair to consider the low score on participation of end-users the only reason for the low score on application success. Numerous other problems were mentioned that contributed to the difficult development of Emplaza 4.3, as described in section 3.2.1. Until these problems are solved, however, earlier versions can be considered relatively more successful than the latest one. Secondly, it needs to be considered that the PeopleSoft/HR application also had earlier versions as ESS within the Ministry of Defence. In the use of the initial version users encountered too many problems, so the second version was developed as an improvement of the first. A lot more attention for usability went hand in hand with increasing possibilities for users to participate in the development. A lot of problems were therefore found, resulting in quite a lot of rework. Although not all difficulties for users could be solved, the second version was considered to be superior to its predecessor. This shows in the participation dimension, so the second version is placed further to the right in the graph. However, the success positioning is not as high as might be expected given the amount of user participation. This mainly has to do with the low scores on contribution to efficiency and the overall success rating of the application by the respondent. Thirdly, it appeared that the P-Direkt application, to be used by several Ministries, is hard to compare with regard to its success, as Table 3
indicates. For questions 1 and 2, the respondent gave scores of 2 and 1 respectively. The current application is still only partly implemented and used by two departments, while the goal is to use it at all government departments. Therefore questions 3 to 6 are not applicable in this case. Fourth and finally, we need to clarify how we quantified user participation in order to score the four cases on this dimension and plot it against their application success. We judged that the Ministry of Health, Welfare and Sport demonstrated relatively the most time and effort spent on user participation. For instance, all managers were approached through demo sessions and invited to comment on the application. Although fewer different approaches were used than at the Ministry of Defence, the relative amount of time spent is considered to be greater, so the P-Loket application is considered to score higher on user participation than the second ESS version at the Ministry of Defence. Besides the delay in the start of the project, not a lot of problems arose during the development of the application. Most of the problems users experienced with the application were also caught in the different tests during the development. The amount of rework to be done was therefore considered 'much', but it could be done early in the development. Since the application introduced self-service, it contributed a lot to the efficiency of the organisation. From the interviews it became clear that users participated less in the cases of Emplaza 4.3 and the first version at the Ministry of Defence; therefore these are ordered at the lower end of this dimension. Given these remarks, the user participation and application success scores for the different cases are plotted in Figure 3, recognizing that for two cases two measurements in time are actually included. Without claiming to have precise measurements and quantifications, Figure 3 clearly confirms that user participation is positively related to (perceived) application success. This is supported both by a cross-sectional comparison of the four cases and by comparing the two Emplaza and
PeopleSoft/HR cases over time. The implications of this hypothesized and convincing result are discussed in the next, closing section.
CONCLUSION AND DISCUSSION

This paper departed from an analysis of the current literature on information system success, user satisfaction and user involvement. A number of studies were found that described which factors influence the success of information systems. From the DeLone & McLean IS Success Model, the concepts of system quality, (intention to) use and user satisfaction were found to be important influential factors. Other findings concerned the influence of perceived usefulness and perceived ease of use on user satisfaction and intention to use. These concepts could subsequently be influenced by the involvement of users in the development process of new software applications. Next to this, a distinction between the concepts of involvement and participation was suggested. Several findings of positive relationships between user participation, involvement and system success were presented. In the end the literature study was combined into a conceptual model. This model visualised the mentioned links between a number of concepts of DeLone & McLean's IS Success Model, the Technology Acceptance Model and user involvement & participation. The case studies portrayed are based on interviews with civil servants employed at different governmental organisations. One of the outcomes was a list of currently used user participation methods. In line with the findings from the literature study, respondents also argued that users should be involved early; however, not too early, because that would delay the development process too much. The challenges faced with involving users did not deviate from the literature section either. Investigating the cases also points to the positive effect of user participation on the success of an application. Projects that have users participating
in the development seem to be more successful than the ones that show less user participation. The only clearly visible exception is the case of the Ministry of Defence. It might be hard to compare the success of the different applications; however, the differences between versions of applications are obvious (Emplaza and the Ministry of Defence). Besides this confirmation of the positive results of user participation, a number of lessons can also be learned when studying these cases. A number of hints and points of attention were even explicitly mentioned by the respondents with regard to user participation. A number of important hints are listed below:

• User participation requires time and a good schedule: It is important to think about the consequences of participating end-users. Input from users will need to be gathered and put in order, which takes time. Subsequently the results need to be analysed to, for instance, decide which requirements should be incorporated in the design or which findings should be solved. Using MoSCoW lists, it is possible to rank requirements and suggestions into "must have", "should have", "could have" and "would have" items (a small sketch of such a list follows this list of hints). To ensure the project stays on schedule, it is necessary to set deadlines by which decisions need to be made; otherwise endless discussions might arise and requirements will keep changing. Also concerning scheduling: make sure end-users are brought in after some basic ideas have already been thought of by the development team.

• Try to find motivated end-users and have something to show them: In choosing users to participate in the project, try to find the ones that will be motivated to think constructively about the application. It might not be an easy task to do this in such large organisations, but the network of the project group or managers could be asked to produce a list of possible participants. To enable this set of end-users to come up with useful suggestions, it is wise to visualise parts of the application early. People will find it difficult to supply ideas without something they can see; even a simple mock-up will be fine to start a discussion.

• Keep in mind the overall process that needs to be supported or automated: The development of the self-service application itself is not the main goal of the project. There is a process or task that needs to be automated or supported. When designing, developing and testing, always keep this process or task in mind. For instance, observe users when executing a task to find out what other processes might be linked to this task. Or, when testing, ask the test person about his or her perceptions of what has happened and what the next step in the process will be.

• Development team on location: Having the development team close to end-users, for instance on location, shortens the communication lines. This enables more frequent consultation between end-users and programmers concerning, for instance, uncertainties about requirements, or simply asking users' opinions on what has been developed thus far. Being able to follow the progress more easily will also positively influence the involvement of end-users.

• Expectancy management: Make sure to tell participating users what will be done with their input and why. Not all of their suggestions and problems might be implemented or solved. To ensure they remain willing to cooperate, it is important to communicate why certain decisions have been made and why some of their input is not visible in the developed application.
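As a small illustration of the MoSCoW ranking mentioned in the first hint, the sketch below sorts a handful of requirement suggestions by priority class. The requirement texts and the helper itself are hypothetical, chosen only to mirror the leave-request examples in this chapter:

    # Order of MoSCoW priority classes, highest first.
    MOSCOW_ORDER = ["must have", "should have", "could have", "would have"]

    requirements = [
        ("approve a leave request in one click", "could have"),
        ("file a request for leave", "must have"),
        ("show the real-time status of a request", "should have"),
    ]

    # Sort by the position of the priority class in MOSCOW_ORDER.
    for text, priority in sorted(requirements,
                                 key=lambda r: MOSCOW_ORDER.index(r[1])):
        print(f"[{priority}] {text}")

Ranking requirements this way makes the deadline discipline from the first hint concrete: when time runs out, the items lowest in the ordering are dropped first.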
A number of additional lessons were mentioned by the respondents. It was mentioned that employees should be able to use the self-service applications without too much support from the HR departments. Otherwise there would only be a shift of the workload for the HR department from HR tasks to supporting users in the use of the application. In that case the organisations would not realise the targeted improvements in efficiency. A third important aspect to take into account is the distinct decision process organisations like the Ministries have. This was a confirmation of one of the points in the report of The Netherlands Court of Audit. The decision process includes a fairly large number of people, takes considerable time and can be politically oriented.
REFERENCES

Barki, H., & Hartwick, J. (1989). Rethinking the concept of user involvement. MIS Quarterly, 13(1), 53–63.
Baroudi, J. J., Olson, M. H., & Ives, B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29(3), 232–238. doi:10.1145/5666.5669
Beyer, H., & Holtzblatt, K. (1999). Contextual design. Interaction, 6(1), 32–42. doi:10.1145/291224.291229
Blackburn, J., Scudder, G., & Van Wassenhove, L. N. (2000). Concurrent software development. Communications of the ACM, 43(11), 200–214. doi:10.1145/352515.352519
Blomberg, J., Giacomi, J., Mosher, A., & Swenton-Hall, P. (1993). Ethnographic field methods and their relation to design. In D. Schuler & A. Namioka (Eds.), Participatory Design: Principles and Practices (pp. 123-155). Hillsdale: Lawrence Erlbaum.
Blomquist, Å., & Arvola, M. (2002). Personas in action: ethnography in an interaction design team. Proceedings of the Second Nordic Conference on Human-Computer Interaction (pp. 197-200). ACM.
Borland. (2006). Retrieved April 25, 2007, from http://www.borland.com/resources/en/pdf/solutions/rdm_whitepaper.pdf
Brown, D. (1996). The challenges of user-based design in a medical equipment market. In D. Wixon & J. Ramey (Eds.), Field Methods Casebook for Software Design (pp. 157-176). New York: Wiley.
Butler, M. B. (1996). Getting to know your users: usability roundtables at Lotus Development. Interaction, 3(1), 23–30. doi:10.1145/223500.223507
Chatzoglou, P. D., & Macaulay, L. A. (1996). Requirements capture and analysis: a survey of current practice. Requirements Engineering, 1(2), 75–87. doi:10.1007/BF01235903
Cherry, C., & Macredie, R. D. (1999). The importance of context in information system design: an assessment of participatory design. Requirements Engineering, 4(2), 103–114. doi:10.1007/s007660050017
Clement, A., & Van den Besselaar, P. (1993). A retrospective look at PD projects. Communications of the ACM, 36(6), 29–37. doi:10.1145/153571.163264
Damodaran, L. (1996). User involvement in the systems design process - a practical guide for users. Behaviour & Information Technology, 15(6), 363–377. doi:10.1080/014492996120049
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. doi:10.2307/249008
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems, 19(4), 9–30.
Dutch Court of Audit. (2007). Lessons from IT-projects at the government [Lessen uit ICT-projecten bij de overheid]. Retrieved from http://www.rekenkamer.nl/9282000/d/p425_rapport1.pdf
Fister Gale, S. (2003). Three stories of self-service success. Workforce, 82(1), 60–63.
Gibson, H. (1977). Determining user involvement. Journal of Systems Management, 20-22.
Gould, J. D., & Lewis, C. (1985). Designing for usability: key principles and what designers think. Communications of the ACM, 28(3), 300–311. doi:10.1145/3166.3170
Grudin, J. (1991). Systematic sources of suboptimal interface design in large product development organizations. Human-Computer Interaction, 6(2), 147–196. doi:10.1207/s15327051hci0602_3
Ives, B., & Olson, M. H. (1984). User involvement and MIS success: a review of research. Management Science, 30(5), 586–603. doi:10.1287/mnsc.30.5.586
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785–793. doi:10.1145/358413.358430
Janssen, M., & Joha, A. (2006). Motives for establishing shared service centers in public administrations. International Journal of Information Management, 26(2), 102–115. doi:10.1016/j.ijinfomgt.2005.11.006
Kensing, F., & Blomberg, J. (1998). Participatory design: Issues and concerns. Computer Supported Cooperative Work, 7(3/4), 167–185. doi:10.1023/A:1008689307411
Konradt, U., Christophersen, T., & Schaeffer-Kuelz, U. (2006). Predicting user satisfaction, strain and system usage of employee self-services. International Journal of Human-Computer Studies, 64(11), 1141–1153. doi:10.1016/j.ijhcs.2006.07.001
Kujala, S. (2003). User involvement: a review of the benefits and challenges. Behaviour & Information Technology, 22(1), 1–16. doi:10.1080/01449290301782
Lin, W. T., & Shao, B. B. (2000). The relationship between user participation and system success: a simultaneous contingency approach. Information & Management, 37(6), 283–295. doi:10.1016/S0378-7206(99)00055-5
Littlejohn, S. W. (2002). Theories of Human Communication. Belmont: Wadsworth/Thomson Learning.
Lucas, H. J. (1974). Systems quality, user reactions, and the use of information systems. Management Informatics, 3(4), 207–212.
Mantei, M. M., & Teorey, T. J. (1988). Cost/benefit analysis for incorporating human factors in the software lifecycle. Communications of the ACM, 31(4), 428–439. doi:10.1145/42404.42408
Mason, R. O. (1978). Measuring information output: A communication systems approach. Information & Management, 1(4), 219–234. doi:10.1016/0378-7206(78)90028-9
McKeen, J. D., & Guimaraes, T. (1997). Successful strategies for user participation in systems development. Journal of Management Information Systems, 14(2), 133–150.
McKeen, J. D., Guimaraes, T., & Wetherbe, J. C. (1994). The relationship between user participation and user satisfaction: an investigation of four contingency factors. MIS Quarterly, 18(4), 427–451. doi:10.2307/249523
Ministerie van Binnenlandse Zaken en Koninkrijksrelaties. (2007). Nota Vernieuwing Rijksdienst. Retrieved February 08, 2008, from http://www.minbzk.nl/aspx/download.aspx?file=/contents/pages/89897/notavernieuwingrijksdienst.pdf
Ministry of Internal Affairs. (2006). Press statement. Retrieved July 21, 2007, from P-Direkt: http://www.p-direkt.nl/index.cfm?action=dsp_actueelitem&itemid=QKNGJL8E
Muller, M. (2001). A participatory poster of participatory methods. Conference on Human Factors in Computing Systems, CHI '01 Extended Abstracts on Human Factors in Computing Systems (pp. 99-100).
Nielsen, J. (1993). Usability Engineering. San Diego: Academic Press.
Noyes, P. M., Starr, A. F., & Frankish, C. R. (1996). User involvement in the early stages of an aircraft warning system. Behaviour & Information Technology, 15(2), 67–75. doi:10.1080/014492996120274
Pekkola, S., Niina, K., & Pasi, P. (2006). Towards formalised end-user participation in information systems development process: bridging the gap between participatory design and ISD methodologies. Proceedings of the Ninth Participatory Design Conference 2006 (pp. 21-30).
Robey, D., & Farrow, D. (1982). User involvement in information system development: a conflict model and empirical test. Management Science, 28(1), 73–85. doi:10.1287/mnsc.28.1.73
Rowley, D. E. (1996). Organizational considerations in field-oriented product development: Experiences of a cross-functional team. In D. Wixon & J. Ramey (Eds.), Field Methods Casebook for Software Design (pp. 125-144). New York: Wiley.
Ruël, H., Bondarouk, T., & Looise, J. K. (2004). E-HRM: innovation or irritation. An explorative empirical study in five large companies on web-based HRM. Management Revue, 15(3), 364–380.
Saarinen, T. (1996). An expanded instrument for evaluating information system success. Information & Management, 31(2), 103–118. doi:10.1016/S0378-7206(96)01075-0
Shannon, C. E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Shapiro, D. (2005). Participatory design: the will to succeed. Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility (pp. 29-38). Aarhus, Denmark.
Simonsen, J., & Kensing, F. (1997). Using ethnography in contextual design. Communications of the ACM, 40(7), 82–88. doi:10.1145/256175.256190
Strohmeier, S. (2007). Research in e-HRM: Review and implications. Human Resource Management Review, 17(1), 19–37. doi:10.1016/j.hrmr.2006.11.002
Verheijen, T. (2007). Gestrikt: E-HRM komt er uiteindelijk toch. Personeelsbeleid, 43(11), 20–23. Wood, L. E. (1997). Semi-structured interviewing for user-centered design. Interaction, 4(2), 48–61. doi:10.1145/245129.245134
KEY TERMS AND DEFINITIONS

Application Deployment: The adoption, implementation and usage of an information system or IT application within the context of an organization.
Employee Self-Service: A corporate web portal that enables managers and employees to view, create and maintain relevant personnel information.
Shared Service Centres (SSC): Newly created organization units, mostly implemented in large organizations, to centralize supportive activities such as administration, facilities, HR and IT services.
Semi-Structured (Topic) Interviews: A case study method to collect qualitative and/or intangible data by questioning pre-selected respondents in a non-conditional, informal setting.
User Participation: Assignments, activities, and behaviors that users or their representatives perform during the systems development process.
User Involvement: A subjective psychological state reflecting the importance and personal relevance that a user attaches to a given system.
User Satisfaction: Subjective judgment of the information system by the user, used as a measure for the success of an information system.
This work was previously published in Handbook of Research on E-Transformation and Human Resources Management Technologies: Organizational Outcomes and Challenges, edited by Tanya Bondarouk, Huub Ruel, Karine Guiderdoni-Jourdain and Ewan Oiry, pp. 56-77, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 5.13
Assessing Information Technology Capability vs. Human Resource Information System Utilization

Ralf Burbach, Institute of Technology Carlow, Ireland
Tony Dundon, National University of Ireland-Galway, Ireland

DOI: 10.4018/978-1-59904-883-3.ch009
Introduction

The ever-increasing capabilities of human resource information technology (HRIT) and human resource information systems (HRIS) have presented HR departments with an opportunity to generate and analyze vast amounts of employee information that could potentially be used for strategic decision-making purposes and to add value to the HR department and ultimately the entire organization. Research in this area has frequently highlighted that most organizations merely deploy HRIT to automate routine administrative tasks. In general, these studies assume the existence of IT capabilities and sophistication without further investigating what these consist of
and how or whether existing IT capabilities could be related to the different uses of HR information, that is, strategic decision-making as opposed to automation. In this article, we introduce and discuss a model that aids the categorization of firms regarding their HRIT capabilities vs. their use of HR information. Furthermore, we will explore the factors that determine the utilization of HR information for strategic decision-making purposes.
Background

As organizations continuously strive to become more competitive and to reduce operating costs, the pressure on the human resource (HR) function to add value to the organization is mounting. A growing number of organizations are introducing
a variety of information and communication technologies (ICT), including interactive voice response (IVR), intranets, self-service HR kiosks, e-HRM (electronic human resource management), and HRIS to achieve just that. The consultation of any practitioner-based journal or consultancy report on the topic will illustrate the range of technologies available, the array of applications, and the plethora of providers of these technologies. We will refer to these HR-related ICTs as HRIT throughout the article to simplify matters. Some authors have suggested that the introduction of these types of technologies has presented HR departments with an opportunity to evolve from cost centers into profit centers (Bussler & Davis, 2001; Groe, Pyle, & Jamrong, 1996; Hannon, Jelf, & Brandes, 1996). Others have advocated that IT has the ability to revolutionize the HR function and to transform it into a strategic business unit (Broderick & Boudreau, 1992; Lepak & Snell, 1998). Yet research has shown that most organizations appear to deploy technology merely to automate routine administrative tasks (Ball, 2001; Groe et al., 1996; Kinnie & Arthurs, 1996; Yeung & Brockbank, 1995). A host of authors from different disciplines, for instance HR management, strategic management and IT management, have attempted to classify HRIT applications by their purpose, that is, automation vs. strategic decision-making. A number
of these classifications are presented in Table 1. Upon observation, it becomes evident that most of these categorizations derive from Anthony's (1965) three levels of management—operational, managerial, and strategic. On a timescale of decision-making, the operational dimension is considered to be short-term, the managerial to be intermediate and the strategic to be long-term, respectively. Zuboff (1988), for instance, refers to three increasing levels of IT utilization—"automating," "informating," and "transformating"—whereby "informating" entails generating information and using this information to support strategic decision-making, while "transformating" refers to the complete transformation of the organization utilizing IT. Kavanagh, Gueutal, and Tannenbaum (1990) place HRIT utilization on a continuum from file storage to decision making (Table 1). These categories are mirrored in Broderick and Boudreau's (1992) classification. Beckers and Bsat (2002) propose a five-step decision support system (DSS) classification model to assess whether an HRIS can provide an organization with a competitive advantage, while Martinsons (1994) simply divides HRIT usage into sophisticated and unsophisticated applications. Hendrickson (2003) asserts that HRIS utilization could potentially lead to increases in efficiency and effectiveness, while also enabling activities that would not have been feasible prior to the introduction of IT.
Table 1. HRIT classification summary (classifications of HRIT applications in the literature, arranged against Anthony's (1965) three levels of management)

Level (Anthony, 1965) | Zuboff (1988) | Kavanagh et al. (1990) | Broderick et al. (1992) | Beckers et al. (2002) | Martinsons (1994)
Operational | "automating" | electronic data processing | transaction processing | MIS | unsophisticated
Managerial | "informating" | management information systems | expert advice | DSS; Group DSS | -
Strategic | "transformating" | decision support systems | decision support | Expert Systems (ES); Artificial Intelligence | sophisticated
These varying but by no means dissimilar categorizations of HRIT applications can be converged into a single framework. This framework essentially consists of a continuum (as suggested by Kavanagh et al., 1990) with administrative use at one extreme and strategic use at the other (see Figure 1). For the purposes of this article, therefore, administrative use refers to routine data storage and processing, whereas strategic use denotes strategic planning (recruitment and selection, training and development or HR planning) and decision-making activities. The literature suggests that firms can be located somewhere along this continuum consistent with the manner in which they employ HRIT.

Figure 1. HRIT application continuum

Having identified what we refer to as strategic use of HRIT, we will now proceed to impart what can be understood by the term HRIT capability. To achieve this we will draw on the information systems literature. An organization's tangible IT infrastructure is made up of hardware, software, networks, and databases. Porter and Millar (1985), however, claim that the term "information technology" reaches beyond computers to comprise all information generated within an organization as well as the totality of technologies used to acquire, manage and distribute this information. Organizations diverge considerably in their level of IT capabilities depending on a host of factors, including company size and industry sector. In terms of HRIT, it is the level to which and the tasks for which firms employ technology that determine the category of user an organization falls under.
Information technology has frequently been implicated as the catalyst, facilitator, and even basis for sustained competitive advantage (Barney, Wright, & Ketchen, 2001; Parsons, 1984; Porter & Millar, 1985). Other authors have emphasized the strategic importance of IT in leveraging human resources (Ulrich, 2000; Yeung & Brockbank, 1995), which itself is often considered one of the main sources of competitive advantage (Grant, 1991, 1996; Prahalad & Hamel, 1990). Consequently, Tansley, Newell, and Williams (2001) argue that HR professionals need to accumulate and have access to key information about employees to capitalize on their skills and competencies. This short discourse serves to underline the need for an adequate IT infrastructure and relevant IT skills and knowledge to capitalize on IT and available information (Bharadwaj, 2000). While the purpose of and manner in which IT is employed can be assessed with the horizontal HRIT application continuum depicted in Figure 1, an additional measure is required with which to evaluate existing IT capabilities. The vertical continuum in Figure 2 is instrumental in determining these capabilities. Similar to the horizontal continuum, firms can be located somewhere along the vertical axis: firms with low IT capability are grouped at the lower end, and those with high IT capability at the upper end of the scale. The continua in Figure 1 and Figure 2 combined form a two-by-two matrix, which we have termed the human resource information technology utilization matrix (see Figure 3).
Figure 2. IT Capability
The matrix is divided into four quadrants intended to assess both dimensions simultaneously. It is suggested here that companies can be placed in any of these quadrants depending on their IT capability, on the one hand, and the extent to which they employ this technology for administrative or strategic purposes, on the other. The major limitations of this model are that numerous factors exist that affect either or both dimensions and that it cannot prescribe individual solutions to organizations that underperform on either scale. However, the model does highlight on which continuum a firm ought to improve in order to make the most of
available technology with regard to analyzing HR information for strategic decision-making purposes. In order to apply the matrix, we need to establish how organizations in the quadrants can be categorized.
Figure 3. HRIT utilization matrix

Factors that Determine the Utilization of HR Information

Firms with no or few computers that merely hold basic personnel data (in other words, an "electronic filing cabinet") could be grouped in
the bottom left quadrant labeled "AL" (Administrative application and Low IT capability, see Figure 3). Kovach and Cathcart (1999) argue that HRIS need not be computerized to serve strategic purposes. Thus, the bottom right quadrant tagged "SL" (Strategic application and Low IT capability) categorizes businesses with little or no IT capacity that engage in information management with the aim of utilizing this information for planning and decision-making purposes that are aligned with and support the overall business strategy. One would expect only a small number of firms in this category, in view of the rapid diffusion of IT in organizations. Enterprises in the upper left quadrant, labeled "AH" (Administrative application and High IT capability), possess state-of-the-art ICT but merely avail of its automating and cost-reduction capabilities without following any explicit cost-reduction strategy. Finally, the upper right quadrant, "SH" (Strategic application and High IT capability), is reserved for those organizations that capitalize fully on the strategic abilities of an innovative, fully integrated, enterprise-wide system where all stakeholders have access to and make use of its decision support, knowledge management and expert system capabilities. Although these are somewhat ideal notions of how firms could be categorized, most enterprises are expected to reside somewhere near the centre, with few companies at the extremes, but still within distinct quadrants and with scope for improvement in either or both of the dimensions, that is, in terms of their IT capability and their strategic inclination.

Research in the area has repeatedly criticized the continuing administrative use and underutilization of HR technology for strategic purposes (Ball, 2001; Kinnie & Arthurs, 1996). Some authors have suggested reasons for HRIS underutilization (Burbach & Dundon, 2005; Kavanagh et al., 1990; Kinnie & Arthurs, 1996). Research evidence suggests that larger organizations and those with an established HR department are more likely to use their system strategically. Intra-organizational power and politics and a desire to preserve the status quo are additional factors that might inhibit HRIS utilization. Other reasons for the continued lack of strategic use of HRIS include HR practitioners' lack of IT skills and a lack of senior management commitment to capitalize on system capabilities. Burbach and Dundon (2005) have also identified the lack of employee involvement in the HRIS implementation process as a barrier to full HRIS utilization and to satisfaction with the existing HRIS. This seems to suggest that high IT capability does not automatically lead to strategic applications of that technology, a point made extensively in the literature. A number of authors advocate that capital investments in IT alone cannot guarantee its strategic application (Davenport, 1994; Miller & Cardy, 2000; Porter & Millar, 1985). One could still argue, however, that at least moderate IT capabilities are required to capitalize on the managerial and executive decision-making potential of technology, given the implied need for sufficient processing power to achieve these aims. Thus, the literature seems to suggest that the majority of organizations could in fact be located in the AH quadrant of our model; that is, organizations possess the necessary IT infrastructure but fail to capitalize on its strategic decision-making potential.
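To make the quadrant logic concrete, the following minimal sketch (not part of the original model) maps a firm's position on the two continua to one of the four quadrant labels. The 0-10 scoring scale and the midpoint split are illustrative assumptions; the chapter treats both dimensions as continua rather than as measured scales.

```python
# Minimal sketch (not from the chapter) of the HRIT utilization matrix
# in Figure 3. Assumption: both dimensions are scored on a 0-10 scale
# and split at a hypothetical midpoint; the chapter prescribes no scale.

def hrit_quadrant(it_capability: float, strategic_use: float,
                  midpoint: float = 5.0) -> str:
    """Return the quadrant label for a firm's two scores.

    AL = Administrative application, Low IT capability
    SL = Strategic application, Low IT capability
    AH = Administrative application, High IT capability
    SH = Strategic application, High IT capability
    """
    high_it = it_capability >= midpoint     # vertical axis (Figure 2)
    strategic = strategic_use >= midpoint   # horizontal axis (Figure 1)
    if high_it:
        return "SH" if strategic else "AH"
    return "SL" if strategic else "AL"

# Hypothetical firms: an "electronic filing cabinet" user, a firm that
# automates with state-of-the-art ICT, and a fully integrated user.
print(hrit_quadrant(2.0, 1.5))   # -> AL
print(hrit_quadrant(8.5, 3.0))   # -> AH
print(hrit_quadrant(9.0, 8.0))   # -> SH
```

Under such a scoring, the AH diagnosis discussed above corresponds to a high score on the capability axis combined with a low score on the strategic-use axis.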
Future Trends

It can reasonably be expected that the general level of IT capabilities in firms will continue to rise steadily, just as the availability and sophistication of HRIT will increase. However, as the matrix suggests, this fact alone does not guarantee the strategic use of technology. Therefore, it can be assumed that the number of organizations in the AH quadrant will increase at the expense of numbers in the AL quadrant. Nevertheless, there is ample scope for organizations to progress into the SH quadrant if they, inter alia, keep abreast of developments in HRIT, for instance self-service
technology, and educate HRIT users in the usage of this technology (Burbach & Dundon, 2005). In addition, organizations should reinforce among their stakeholders the importance of using HR information strategically, so that HR can become a strategic business unit, strive to add value to the organization, and ensure that existing and new technology is used to its full potential.
Conclusion

Research on the use of IT and HRIS in HRM has repeatedly reaffirmed that existing technologies continue to be employed for "automating" activities rather than "informating" (Ball, 2001; Zuboff, 1988). Nevertheless, transaction processing or automation is not necessarily negative. Kinnie and Arthurs (1996) feel that these kinds of applications could be beneficial to an organization contingent on its circumstances, while Zuboff (1988) asserts that organizations need to automate processes before they can commence "informating." As more and more organizations realize the importance of their human resources, and subsequently the strategic value of analyzing HR information, many of these firms will move beyond the operational decision-making level towards managerial decision-making, especially if above-average IT capability is in place. Ball (2001) ascertained in her study that those organizations that have the necessary capabilities to exploit the analytical potential of their HRIS, given their organizational context, would do so. We propose that the level of existing IT capabilities is one of the determinants of strategic or "sophisticated" applications of technology (Martinsons, 1994). Nevertheless, IT capabilities, and the use of HRIS in particular, are not an essential condition for, but could be considered a key catalyst for, analytical decision support. Authors such as Davenport (1994) and Soliman and Spooner (2000), however, emphasize that desired improvements in information management cannot be attained by
capital investments in IT alone. As research has shown, appropriate IT knowledge and skills are an essential ingredient in the strategic application of HRIT (Burbach & Dundon, 2005). Thus, our discussion endorses suggestions that IT could, in theory, be instrumental in transforming HR into a strategic business unit (Hannon et al., 1996; Stroh & Caligiuri, 1998; Tansley & Watson, 2000; Twomey & Harris, 2000; Yeung, Brockbank, & Ulrich, 1994).
References

Anthony, R. (1965). Planning and control systems: A framework for analysis. Boston, MA: Division of Research, Graduate School of Business Administration, Harvard University.

Ball, K. S. (2001). The use of human resource information systems: A survey. Personnel Review, 30(6), 677–693. doi:10.1108/EUM0000000005979

Barney, J. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99–120. doi:10.1177/014920639101700108

Barney, J. B., Wright, M., & Ketchen, D. J., Jr. (2001). The resource-based view of the firm: Ten years after 1991. Journal of Management, 27, 625–641. doi:10.1177/014920630102700601

Bharadwaj, A. S. (2000). A resource-based perspective on information technology capability and firm performance: An empirical investigation. MIS Quarterly, 24, 169–196. doi:10.2307/3250983

Broderick, R., & Boudreau, J. W. (1992). HRM, IT and the competitive edge. Academy of Management Executive, 6(2), 7–17.

Burbach, R., & Dundon, T. (2005). The strategic potential of human resource information systems: Evidence from the Republic of Ireland. International Employment Relations Review, 11(1/2), 97–118.

Bussler, L., & Davis, E. (2001). Information systems: The quiet revolution in human resource management. Journal of Computer Information Systems, 42(2), 17–20.

Davenport, T. H. (1994). Saving IT's soul: Human-centered information management. Harvard Business Review, 72(2), 119–121.

DeSanctis, G. (1986). Human resource information systems: A current assessment. MIS Quarterly, 10(1), 15–27. doi:10.2307/248875

Grant, R. M. (1991). The resource-based theory of competitive advantage: Implications for strategy formulation. California Management Review, 33(3), 114–135.

Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17(10), 109–122.

Groe, G. M., Pyle, W., & Jamrong, J. (1996). Information technology and HR. Human Resource Planning, 19(1), 56–60.

Hannon, J., Jelf, G., & Brandes, D. (1996). Human resource information systems: Operational issues and strategic considerations in a global environment. International Journal of Human Resource Management, 7(1), 245–269.

Hendrickson, A. R. (2003). Human resource information systems: Backbone technology of contemporary human resources. Journal of Labor Research, 24(3), 381–394. doi:10.1007/s12122-003-1002-5

Hendry, C., & Pettigrew, A. M. (1986). The practice of strategic human resource management. Personnel Review, 15(3), 3–8. doi:10.1108/eb055547

Kavanagh, M. J., Gueutal, H., & Tannenbaum, S. (1990). Human resource information systems: Development and application. Boston: PWS-Kent Publishing Company.

Kinnie, N. J., & Arthurs, A. J. (1996). Personnel specialists' advanced use of information technology—evidence and explanations. Personnel Review, 25(3), 3–19. doi:10.1108/00483489610147933

Kovach, K. A., & Cathcart, C. E. (1999). Human resource information systems (HRIS): Providing business with rapid data access, information exchange and strategic advantage. Public Personnel Management, 28(2), 275–282.

Lepak, D. P., & Snell, S. A. (1998). Virtual HR: Strategic human resource management in the 21st century. Human Resource Management Review, 8(3), 215–234. doi:10.1016/S1053-4822(98)90003-1

Martinsons, M. G. (1994). Benchmarking human resource information systems in Canada and Hong Kong. Information & Management, 26(6), 305–316. doi:10.1016/0378-7206(94)90028-0

Miller, J. S., & Cardy, R. L. (2000). Technology and managing people: Keeping the "human" in human resources. Journal of Labor Research, 21(3), 447–461. doi:10.1007/s12122-000-1020-5

Parsons, G. L. (1984). Information technology: A new competitive weapon. McKinsey Quarterly, 84(2), 45–60.

Porter, M. E., & Millar, V. E. (1985). How information gives you competitive advantage. Harvard Business Review, 63(4), 149–160.

Prahalad, C. K., & Hamel, G. (1990, May/June). The core competence of the corporation. Harvard Business Review, 79–91.

Soliman, F., & Spooner, K. (2000). Strategies for implementing knowledge management: Role of human resources management. Journal of Knowledge Management, 4(4), 337–345. doi:10.1108/13673270010379894

Stroh, L. K., & Caligiuri, P. M. (1998). Increasing global competitiveness through effective people management. Journal of World Business, 33(1), 1–15. doi:10.1016/S1090-9516(98)80001-1

Tansley, C., Newell, S., & Williams, H. (2001). Effecting HRM-style practices through an integrated human resource information system: An e-Greenfield site? Personnel Review, 30(3), 351–370. doi:10.1108/00483480110385870

Tansley, C., & Watson, T. (2000). Strategic exchange in the development of human resource information systems (HRIS). New Technology, Work and Employment, 15(2), 108–122. doi:10.1111/1468-005X.00068

Tichy, N., Fombrun, C., & Devanna, M. A. (1981). Human resources management: A strategic perspective. Organizational Dynamics, 9(3), 51–67. doi:10.1016/0090-2616(81)90038-3

Twomey, D. F., & Harris, D. L. (2000). From strategy to corporate outcomes: Aligning human resource management systems with entrepreneurial intent. International Journal of Commerce & Management, 10(3/4), 43–55. doi:10.1108/eb047408

Ulrich, D. (2000). From e-business to e-HR. Human Resource Planning, 23(2), 12–21.

Yeung, A., & Brockbank, W. (1995). Reengineering HR through information technology. Human Resource Planning, 18(2), 24–37.

Yeung, A., Brockbank, W., & Ulrich, D. (1994). Lower cost, higher value: Human resource function in transformation. Human Resource Planning, 17(3), 1–15.

Zuboff, S. (1988). In the age of the smart machine. London: Heinemann.
KEY TERMS AND DEFINITIONS

Administrative Use of Human Resource Information Technology: Use of HRIT for high-volume, transaction-processing, non-value-adding activities.

Decision Support Systems (DSS): Systems designed to facilitate long-term senior management decision making; they relate to the overall mission and objectives of an organization and are based on 'what if' scenarios.

E-Human Resource Management (e-HRM): Provision of HR-related services via the Internet, usually a company intranet.

Electronic Data Processing (EDP): Processing of routine information purely at an operational level, dedicated to processing payroll and basic employee data.

Enterprise Resource Planning (ERP) System: An enterprise-wide system which fully integrates all functional areas in a business, for example finance, production, marketing, and HRM.

Human Resource Information System (HRIS): Specialized HR software applications used to collect, store, analyze, and disseminate HR-related information.

Human Resource Information Technology (HRIT): Any type of information and communication technology used to facilitate human resource service provision (interactive voice response, telephone dialing options, facsimile, video conferencing, personal computers, computer kiosks, intranet/Internet, and human resource information systems).

Human Resource (HR) Self-Service: Sharing HR-specific information with employees or enabling employees to update their personnel records, commonly through a company intranet.

Information and Communication Technology (ICT): Umbrella term describing the composite of information technologies and communication technologies. Frequently used interchangeably with information technology.

Management Information System (MIS): System aimed at decision making at the middle management level, for instance budgetary information, time and attendance analysis, ad hoc reports, and projections.
Self-Service Kiosk: A booth or terminal that acts as an access point to computer-related human resource services. These may also be used to communicate with and collect information from employees in remote locations or where it is unfeasible to provide individual workstations for every employee.
Strategic Use of Human Resource Information Technology: Use of HRIT to assist strategic decision-making processes based on the systematic analysis of human resources data.
This work was previously published in Encyclopedia of Human Resources Information Systems: Challenges in e-HRM, edited by Teresa Torres-Coronas, and Mario Arias-Oliva, pp. 56-62, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 5.14
Exploring Perceptions about the use of e-HRM Tools in Medium Sized Organizations

Tanya Bondarouk, University of Twente, The Netherlands
Vincent ter Horst, Saxion Knowledge Center Innovation and Entrepreneurship, The Netherlands
Sander Engbers, COGAS BV, Business Unit Infra & Networkmanagement, The Netherlands

DOI: 10.4018/978-1-60566-304-3.ch018
ABSTRACT
This research focuses on the acceptance of Human Resource Information Systems (HRIS) in medium sized organizations. We look at general SME's in The Netherlands. The goal of this research is to analyse perceptions about the use of HRIS, as there is currently very little knowledge about it in medium sized organizations. To support the explorative nature of the research question, four case studies were selected in organizations that were using HRIS. Overall we conclude that the use of e-tools in medium sized organizations is perceived as useful, though not easy to use. The organizations involved perceive that the use of HRIS helps them to make HRM more effective.

INTRODUCTION
Nowadays IT supports critical HRM functions such as recruitment, selection, benefits management, training, and performance appraisal (Grensing-Pophal, 2001). During the last decade, large organizations have for several reasons implemented an increasing number of electronic Human Resource Management (e-HRM) solutions (Ruël et al., 2007; Ruta, 2005; Voermans and Veldhoven, 2007). While large organizations are no longer surprised by e-HRM, SME's are at an early stage of its adoption. Our knowledge about HRM practices in SME's is limited (Huselid, 2003), and even less is known about e-enabled HRM in SME's. Only during the last couple of years has research into HRM in SME's taken off. According to Heneman et al. (2000), scholars are lamenting the dearth of information about HRM
practices in SME’s as the existing HRM theories are often developed and tested in large organizations. At the same time, there is a serious need in the development/ application of HRM concepts in the environment of SME’s as well-motivated and well-trained workers are probably the most important assets for smaller companies to stay competitive (Huselid, 2003). De Kok (2003) puts forward two main arguments which justify the specific attention for SME’s: firstly, SME’s form a large and vital part of modern economies; and secondly, despite the heterogeneous character of the SME sector, SME’s differentiate from large organisations in many respects. This bias in HR research is of course understandable. Larger companies have the resources and people available to implement and perform state of the art HR policies and practices and are thus more exciting research playgrounds. They usually have more, and more sophisticated HR in place. But neglecting SME’s is inconvenient, given their position in most economies. In the US for instance, 99,7% of all companies have fewer than 500 employees (the US definition of SME’s is all companies with fewer than 500 employees), a startling 78,8% have fewer than 10 employees (Heneman et al. 2000). The European definition of SME’s is companies with fewer than 250 employees. If we use that definition, in the Netherlands 714.000 out of a grand total of 717.035 companies are SME’s. Only about 1300 companies have more than 500 employees, whereas about 386.890 have no employees at all (Van Riemsdijk and Bondarouk, 2005). According to the Dutch organisation for SME’s companies with up to 250 employees (99% of all Dutch companies) provide 2.8 million jobs, more than half of the total of 4.8 million jobs in the Dutch private sector. 48% of the added value and 53% of yearly turnover of this sector is generated by these small and medium sized companies (Meijaard et al., 2002). Yet good personnel management seems at least as important for small companies as for larger ones, and owners/directors of small companies
are well aware of that. Indeed, next to general management issues, personnel policies are seen as the most important aspect of management by owners/directors of smaller companies (Hess, 1987, in Hornsby and Kuratko, 1990). At the same time, top management in smaller companies find personnel policy issues both difficult and frustrating (Verser, 1987, cited in Hornsby & Kuratko, 1990). In other words, many owners/directors of SME's do find HR important enough to occupy themselves with it directly, but at the same time find it very hard to address the issue in a proper way and could use some help on the topic. Concluding, what is known is that SME's account for a significant proportion of employment in different countries, and that the owners/managers of SME's have a high interest in personnel issues. However, what is not known is how people are managed in these firms, and to what extent information technologies play a role in people management (if any). The number of articles looking at HRM issues in small business is increasing (the Special Issues of the Human Resource Management Review in 1999 and 2006 are good evidence of the growing interest in it). However, as Heneman et al. (2000) and Cardon and Stevens (2004) have shown, progress has been slow. Research into electronic HRM has not yet touched SME's. There is no clarity about whether SME's use e-HRM applications and for what purposes, what the full advantages of e-HRM for SME's are (if any), and to what extent e-HRM improves HR processes. It is hard to accurately capture the extent of e-HRM usage by SME's, as the data on e-HRM practices is primarily based on research in large organizations. Because of the very important position of SME's in our economies, on the one hand, and our limited understanding of HRM and specifically e-HRM in SME's, on the other, we started an explorative research into e-HRM in Dutch SME's. Our goal was to analyse the use of e-HRM tools in SME's, and compare it with what we know about e-HRM in large organizations. The research question,
therefore, was formulated as: "How do SME's perceive the use of e-HRM tools?" This research has been done in cooperation with Unit 4 Agresso, a software company that offers IT solutions for HRM administrative processes to the market.
THEORETICAL FRAMEWORK

In order to build a theoretical guideline for an empirical exploration, we first need to crystallize the characteristics of HRM processes in SME's.
Human Resource Management in SME's

It is widely acknowledged that larger organizations benefit from a formalization of HRM practices, as they are more focused on standardizing tasks and formalizing HR practices to be more efficient (for example in recruiting). In contrast, small organizations are expected to benefit from informal HR practices, as they can create jobs around the unique experience, skills, knowledge and interests of employees. From empirical research we learn that size is very important, especially for the formalisation of HR policies. Hornsby and Kuratko (1990), in their research on 247 US SME's, made a division into three 'size classes': 1-50 (53% of their population), 51-100 (22%), and 101-150 (25%) employees, based on the assumption that with increasing size personnel policies would grow more complex and more formalised. It became clear that the sophistication of the personnel activities was directly linked with size: the bigger the company, the more sophisticated and extensive were the policies in use. One remarkable result came out with regard to a question on what company owners would consider to be the most important personnel domains for the coming years. All mention the same domains, albeit in a somewhat different order of importance: 1. establishing pay-rates and
secondary remuneration, 2. availability of good personnel, 3. training, 4. the effects of government regulations, and 5. job security. All of these can be tackled by implementing good personnel policies, they conclude. This result by Hornsby and Kuratko (1990) is mirrored in other research as well. Kotey and Slade (2005) conclude that a blueprint for optimal organisation cannot be given for all companies. They expect that two trends will be discernible whenever companies grow in size: first, an increasing division of labour, leading to more horizontal and vertical differentiation; and secondly, that this differentiation will first increase rapidly and then decline in speed. They therefore also divided their research population of 371 companies into three 'size classes': micro (0-5 employees), small (5-29), and medium sized (30-100). Again it proved that the formalisation of HR increases with size and that the critical 'turning point' is at about 20 employees. From that moment on, informal recruiting, but also direct supervision and other direct management styles, become inefficient. The owner starts to get overburdened and has to delegate tasks to other managers. In yet another study, de Kok et al. (2003) looked into the formalisation of HR practices in Dutch SME's. They looked at companies with up to 500 employees and tried to establish which context variables explain HR formalisation. The variables included in the research were size, the existence of a business plan, export orientation, whether the company was a franchise organisation or not, if it was a family-owned business, and the level of union representation in the company. They further looked at the existence of an HR function, at selection procedures, reward systems, training and development programmes, and appraisal systems. Again it proved that with increasing size the formalisation of HR practices increases, but as soon as the context variables are taken into account, 50% of the difference in formalisation evaporates. From the empirical evidence presented so far we conclude that indeed size of the company
should be an important predictor of HR sophistication and the formalisation of HR practices. Size not only seems to predict the complexity of the organisation structure, but also seems to provide a justification for applying more standardised jobs, thereby providing an opportunity for standardising important elements of HR policies, such as recruitment and selection, remuneration and appraisal schemes, training and development. Growth in size would lead to more complex structures and might well overstretch the capabilities of the owner/manager, forcing him to delegate responsibilities to subordinates or other managers. Indeed even the existence of a separate HR function or department seems to be dependent on size, and once such a position is created, the formalisation of HR practices might increase because of that.
Strategy and SME’s A literature overview and a research model for “researching personnel management in Dutch Small and Medium Sized Enterprises” were presented by van Riemsdijk et al. (2005). They concluded that especially two contingencies were the most important: company size and strategy. According to them the empirical evidence of de Kok et al. (2003) showed that size should be an important predictor of HRM sophistication and the formalization of HRM practices. On the other hand they argue that strategy can be relevant for SME’s and their HRM. The strategic choices of the so called “dominant coalition”, in SME’s the owner or director, are very important for the way an organization is run. In this case the strategy has influence on the way HRM is fulfilled in SME’s. Based on previous research van Riemsdijk et al. (2005) conclude that SME’s strategic orientation can be based on the Miles & Snow typology (Miles & Snow, 1978; in van Riemsdijk et al., 2005). This typology distinguishes defenders, prospectors, analysers, and reactors. A defender tries to do the best job possible in its area of expertise, it offers a relatively stable set of services to defined mar-
markets. A prospector frequently changes its products and services, continually trying to be first in the market. An analyser unites the characteristics of defenders and prospectors; it tries to selectively move into new areas but at the same time tries to maintain a relatively stable base of products. A reactor is, in the opinion of Miles & Snow, 'the rest': such organizations do not have a consistent strategy. Unfortunately, van Riemsdijk et al. (2005) were not able to operationalize these strategy typologies properly in their study of SME's. This was caused by the fact that the typologies used were too difficult for their research population; the operationalization of the business strategy needed more attention. Next to the typology of Miles & Snow, other typologies for the strategic orientation of SME's are used in research. The typology of Porter has also been applied to SME's (Gibcus & Kemp, 2003). Porter distinguishes three generic strategies: cost leadership, differentiation, and focus (cost focus & differentiation focus). With cost leadership, an organization tries to become the low-cost producer in its market; it must find and exploit all sources of cost advantage. With differentiation, an organization tries to be unique in some dimensions which are highly valued by the customers. When an organization chooses a focus strategy, it focuses on a specific group or segment and creates a strategy which serves those customers optimally, to the exclusion of others. The organizations that do not fit with one of these three generic strategies are called 'stuck in the middle'. Gibcus & Kemp (2003) conducted research in Dutch SME's, in which SME's were defined as organizations with up to 100 employees. For the data collection they used the so-called 'EIM SME panel'; about 2000 organizations were approached by computer-assisted telephone interviews. They concluded that only one out of five SME's has a written strategy and that five distinct strategies came forward: innovation differentiation strategy, marketing differentiation strategy, service
differentiation strategy, process differentiation strategy, and cost-leadership strategy. These are all based on the typology of Porter but slightly adjusted for SME's. Differentiation plays an especially important role, because cost leadership is mainly possible with large-scale production. Next to those conclusions, they point to further research on the combination of the typology of Miles & Snow and the typology of Porter. Based on de Kok (2003), we add growth strategy to the strategies distinguished by Gibcus & Kemp (2003) and van Riemsdijk et al. (2005). De Kok (2003) proposed, based on the behavioural and strategic contingency perspectives and other empirical research (Lengnick-Hall & Lengnick-Hall, 1988; in de Kok, 2003), that growth orientation is an important contextual determinant for the existence of HRM practices. The availability of a business plan or strategic plan can be interpreted as a characteristic of organizations with a relatively long planning horizon. The presence of a business plan can be used to indicate whether the goals and strategies are made explicit. This business plan may be seen as an indicator for enterprises that have a relatively high degree of formalization. Therefore, de Kok et al. (2003) argue that organizations with a business plan or a strategic plan (or strategy) have more formal HRM practices and are more likely to have an HRM department or HRM manager. We assume that, besides the existence of a business or strategic plan, communication about the strategy is also very important: employees should be aware of the existence of the strategy. Based on the literature about SME's, we expect that the characteristics and content of HRM in medium sized organizations are determined by the size and the strategy of the organization.
HRM Practices in SME's

Cardon and Stevens (2004) sought out scholarly articles concerning human resource practices in entrepreneurial organizations, as defined by the authors of those articles. They found 83 articles in several journals about general management, entrepreneurship and human resource management, as well as several scholarly book chapters in this area. They then coded each article to determine if it was theoretical or empirical and the size and life cycle stage of the organizations it discussed. In the end, 37 articles survived their evaluation. Their paper reviews known HRM practices in SME's. Cardon and Stevens (2004) and others (e.g., Heneman and Tansky, 2002) have stated that the ability to address the challenges faced by small and medium sized organizations depends on organizational approaches to staffing, compensation, training and development, performance management, organizational change, and labour relations. The ability to cope with these challenges influences organizational effectiveness and survival. From the literature research about HRM in SME's we can observe some similarities. When looking at these six practices, we can conclude that they are almost identical to the categorization of de Kok et al. (2003): recruitment, selection, compensation, training and development, and appraisal. Others, like van Riemsdijk et al. (2005), consider 8 traditional HRM fields that are commonly used in questionnaire research in SME's: HRM planning, Recruitment & Selection, Training & Development, Performance evaluation, Rewarding & Remuneration, Career Development, Employer-Employee relations, and Sickness policies. Van Riemsdijk et al. (2005) chose these HRM fields based on large organizations. In the following overview we will focus on the six HRM practices in SME's according to Cardon & Stevens (2004).

Staffing
Staffing concerns recruiting, selection and hiring of employees. According to Hornsby & Kuratko (1990) staffing is very important for SME’s, and
as Heneman & Berkley (1999) argue, may even be the key component of overall effective management of organizational human resources (In: Cardon & Stevens, 2004). But staffing can be problematic for several reasons: limited financial and material resources (In Cardon & Stevens, 2004: Hannan & Freeman, 1984), lack of legitimacy as an employer-of-choice (In Cardon & Stevens, 2004: Williamson, 2000), and the high number of jobs where employees typically perform multiple roles with unclear boundaries and job responsibilities (In Cardon & Stevens, 2004: May, 1997) are reasons why recruiting is problematic. The person-organization fit is an important selection criterion (In Cardon & Stevens, 2004: Chatman, 1991). Norms, values and beliefs of the organizations and applicants are considered important (In Cardon & Stevens, 2004: Williamson, 2000; Williamson et al., 2002). Also, the focus is more on the match of applicants' competencies to general organizational needs than on the match of applicants' competencies to specific job requirements (Heneman et al., 2000). Strategies about staffing are often ad hoc in SME's (In Cardon & Stevens, 2004: Heneman & Berkley, 1999). Often the general manager is responsible for HRM, rather than HR professionals (In Cardon & Stevens, 2004: Longenecker, Moore, and Petty, 1994). A new trend in staffing concerns the outsourcing of HR activities to professional employer organizations (PEO). Organizations also use contingent labour brokers (e.g., for temporary workers) to recruit employees (In Cardon & Stevens, 2004: Cardon, 2003). From the literature it can be concluded that there are differences between SME's and large organizations in the practice of staffing. SME's seem to be more ad hoc in staffing; the fit between employee and organization (culture) is the most important (more than between the employee and the function). Jobs in SME's also have less clearly defined boundaries and involve more multitasking. Staffing seems to be less formal, but is certainly very important in SME's.
Compensation

Compensation involves all the decisions an organization makes concerning payment of its workers, including pay levels, pay mixes, pay structure and pay raises (In Cardon & Stevens, 2004: Balkin & Logan, 1988). Case study evidence suggests, although more research is necessary (Heneman et al., 2000), that SME's are more likely to view compensation from a total rewards perspective than are large companies (e.g., In Cardon & Stevens, 2004: Nelson, 1994). From a total rewards perspective, compensation includes, in addition to monetary rewards in the form of base pay and incentives, psychological rewards, learning opportunities and recognition (In Cardon & Stevens, 2004: Graham et al., 2002; Heneman et al., 2000). SME's differ from large organizations in compensation; differences occur especially in the pay mix. In SME's there is more variable at-risk pay in the mix (In Cardon & Stevens, 2004: Balkin & Gomez-Mejia, 1984; Barringer, Jones, & Lewis, 1988; Milkovich, Gerhart, & Hannon, 1991). The pay mix also changes over the life cycle of the organization, when an organization moves from a growth stage to a mature stage of its products (In Cardon & Stevens, 2004: Balkin & Gomez-Mejia, 1984). Because SME's usually have flat organizational structures with few levels of management and tend to treat employees in an egalitarian way with regard to compensation and rewards, pay structure may differ between large organizations and SME's (In Cardon & Stevens, 2004: Graham et al., 2002). According to Balkin & Logan (1988), small organizations keep traditional hierarchical distinctions to a minimum so that rewards are not indicative of status differences among employees. We assume that, given this lack of status differences expressed through rewards, SME's will not pay much attention to formal compensation (In: Cardon & Stevens, 2004).
Pay raises also differ between SME's and large organizations. Automatic annual salary increases are common in large organizations, whereas SME's usually cannot afford these pay raises because of the high uncertainty of sales or profits (In Cardon & Stevens, 2004: Balkin & Logan, 1988). Because of gaps in the education of employees (due to the changing role of employees in the organization and changing organizational and market conditions), SME's often provide educational benefits as a sort of compensation (Balkin and Logan, 1988, In Cardon & Stevens, 2004).
Training & Development

In SME's, unstructured training, informal job instruction, and socialization are very important for the training processes (In Cardon & Stevens, 2004: Chao, 1997). These can be seen as a substitute for formal training. Nowadays the process of socialization (through informal and formal training) is seen as very important (In Cardon & Stevens, 2004: Rollag, 2002; Rollag & Cardon, 2003). Due to faster and more extensive inclusion, newcomers might be more productive and satisfied with their job than newcomers in large organizations. For SME's, the costs of formal training programs and the time spent away from productive work are important considerations for determining what training opportunities to provide to workers, as resources of both money and worker time are constrained (Banks et al., 1987). In addition, sources of formal training are more restricted for SME's than for large organizations. Trade associations, short college seminars, and in-house training are the main sources for employee development within SME's. Another aspect which increases the importance of training in SME's is the changing roles and expectations of employees within the organization. Due to changing organizational and environmental conditions, multitasking is an important aspect
of employees (In Cardon & Stevens, 2004: May, 1997).
Performance Management

In the literature, topics about performance management (seen as performance evaluation processes, disciplinary procedures, or dismissals of workers (Cardon & Stevens, 2004)) in SME's are very scarce. A reason for this can be the rarity of formalized procedures for performance management in SME's. Formal appraisals are usually not done in SME's because of a relative lack of concern of venture founders with downstream (post-start-up) management issues, particularly those with negative implications such as workers not performing well or the business needing to lay off workers (Cardon & Stevens, 2004). Employee issues are often handled arbitrarily rather than consistently (In Cardon & Stevens, 2004: Verser, 1987). But this arbitrary behaviour is not seen as a problem for productivity or employees' morale. Managers of young and small organizations may, from a philosophical point of view, regard informal ongoing communication and feedback as preferable to a highly formalized performance appraisal procedure (Cardon & Stevens, 2004).
Organizational Change

As stated earlier, SME's experience a lot of changes due to changes in the environment. Chu & Sui (2001) found that SME's have a harder time coping with economic downturns than do large organizations. A study of a large sample of high-technology start-ups in Silicon Valley from the Stanford Project on Emerging Companies (SPEC), which tracked the key organizational and HRM challenges of these small companies, shows the implications of changes in particular sets of HRM practices for employees and organizations (Baron & Hannan,
2002). Compared with companies that did not change, changes in the sets of HRM practices negatively influenced employee turnover and the financial performance of the organizations. In addition, the likelihood of failure increased (relative to organizations that did not change their sets of HRM practices). So changes in sets of HRM practices (for technology organizations) can have significant negative implications at an early point in time.
Human Resource Information Systems and SME's

Table 2 does not imply that the HRM practices in SME's have the same benefits as the e-HRM tools in large organizations; these are only possible benefits. After this table we will discuss the realistically possible benefits of using e-HRM tools in medium sized organizations. Table 2 shows some contradictory data. For example, generating figures and statistics for performance management is, due to the informal nature of the process, not preferred in SME's. The process of socialization in Training & Development is seen
as important (face to face), but this is not achieved by the use of e-HRM tools. The contradictory data is caused by the fact that the HRM practices are based on literature about SME's, while the e-HRM tools are based on literature about large organizations. This is why there is not much congruence between the specialties of the HRM practices and the expected benefits of the e-HRM tools. It can be concluded that most of the characteristics of HRM practices in SME's are about cost reduction and efficiency. This is similar to the goals of SME's for using e-HRM. Implementing e-HRM tools means investing, and this can collide with the goals of SME's for using e-HRM: investments will not take place if it is not certain that cost reductions or improved efficiencies will be achieved.
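To show how the linkage summarized in Table 2 could be handled as data, the following purely illustrative sketch (not part of the study) encodes a subset of the table as a lookup structure. The dictionary layout, the names, and the abbreviated benefit phrases are our assumptions for illustration only, not an existing tool or API.

```python
# Purely illustrative sketch (not from the study): Table 2's linkage of
# HRM practices in SME's to e-HRM tools, encoded as a lookup structure.
# The layout and the abbreviated benefit phrases are assumptions.

E_HRM_LINKAGE = {
    "Staffing": {
        "e-recruitment": [
            "creating brand identity",
            "increasing efficiency of the recruitment process",
            "decreasing administrative burdens",
        ],
        "e-selection": [
            "decreasing the administrative paper burden",
            "minimizing costs",
        ],
    },
    "Compensation": {
        "e-compensation": [
            "streamlining bureaucratic tasks",
            "analyzing market salary data",
        ],
    },
    "Performance management": {
        "e-performance": [
            "generating figures and statistics about performance",
            "facilitating reviews and feedback",
        ],
    },
    "Training & Development": {
        "e-learning": [
            "web-based training",
            "more flexible and cost-efficient than conventional training",
        ],
    },
}

# Look up which e-HRM tools the literature links to a given practice.
for tool, benefits in E_HRM_LINKAGE["Staffing"].items():
    print(f"{tool}: {'; '.join(benefits)}")
```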
METHODOLOGY

The main condition for the selection of case studies was that the organizations were small and/or medium sized in the view of Unit 4 Agresso. This resulted in organizations ranging in size between 50 and 1000 employees.
Table 1. HRM practices in SME's (adapted from Cardon and Stevens, 2004)

1. Staffing
• Key component of overall HRM management
• Person-organization fit is an important selection criterion
• Strategies are often ad hoc
• Problematic due to lack of legitimacy, multiple roles and unclear boundaries, and limited financial and material resources

2. Compensation
• Compensation viewed from a total rewards perspective
• Often variable at-risk pay in the mix
• Pay mix changes over the life cycle of the organization
• Rewards not indicative of status differences
• (Annual) pay raises not common
• Often educational benefits

3. Training & Development
• Unstructured training, informal job instruction and socialization are very important
• Process of socialization important
• Costs and time important considerations in training opportunities
• Multitasking relevant

4. Performance management
• Formal appraisal usually not done
• Employee issues often handled arbitrarily

5. Organizational change
• SME's have a harder time coping with economic downturns than large organizations
• Changes in sets of HRM practices can have significant negative implications at an early point in time
Table 2. Linking HRM practices in SME's with e-HRM tools (specialties from the literature about SME's; e-HRM tools and their possible expected benefits from the literature about large organizations)

Staffing
Specialties: key component of overall effective HRM management; person-organization fit is an important selection criterion; strategies are often ad hoc; problematic due to lack of legitimacy, multiple roles and unclear boundaries, and limited financial and material resources.
e-Recruitment, possible benefits: creating brand identity; increasing employee retention levels; increasing efficiency of the recruitment process; decreasing administrative burdens in the recruitment process; increasing organizational attractiveness.
e-Selection, possible benefits: decreasing the administrative paper burden (used for a first selection); minimizing costs and maximizing utilization of the human capital; sustainability.

Compensation
Specialties: compensation viewed from a total rewards perspective; often variable at-risk pay in the mix; pay mix changes over the life cycle of the organization; rewards not indicative of status differences; (annual) pay raises not common; often educational benefits.
e-Compensation, possible benefits: effectively designing, administering and communicating compensation programs; enabling comparison with external payments; analyzing market salary data; streamlining bureaucratic tasks; greater access to knowledge management databases; quicker internal information processes.

Performance management
Specialties: formal appraisal usually not done; employee issues often handled arbitrarily.
e-Performance, possible benefits: generating figures and statistics about performance more easily; enlarging the span of control for managers; facilitating the process of writing reviews and generating feedback.

Training & Development
Specialties: unstructured training, informal job instruction and socialization are very important; process of socialization important; costs and time important considerations for training opportunities; multitasking relevant.
e-Learning, possible benefits: delivering information about learning, knowledge, and skills; enabling web-based training; more flexible and cost-efficient than normal training and development; 'just in time'; control over learning.
Table 3. Overview of the cases

Organization | Existing HRIS and/or e-HRM tools | Size | Branch
Unidek | Unit 4 Personeel & Salaris | 300 employees, 295 FTE | Construction
Van Merksteijn | Unit 4 Personeel & Salaris | 500 employees, 500 FTE | Metal
Unit 4 Agresso | Unit 4 Personeel & Salaris; e-Recruitment; e-Learning; e-Performance management | 71 employees, 71 FTE | IT
Zodiac Zoos | Unit 4 Personeel & Salaris; Unit 4 Personeel & Salaris Webvastlegging | 250 employees, 150 FTE | Animal parks
The distinction between the two groups of cases is due to the fact that Unit 4 Agresso offers two different software packages to the market: a software package for "general" organizations and a specific software package for the healthcare sector.
Interviews Conducted

Table 4 lists the organizations and the functions of the interviewees. In total we conducted 15 interviews: 11 with one person and 4 with two persons, so we spoke with 19 persons in total. Nine of them worked as HRM specialists and ten as line managers, although the line managers were usually also involved with HRM. The interviews took approximately 45 minutes to an hour each, so in total we did approximately 15 hours of interviewing. All interviews were recorded with a voice recorder.
DESCRIPTION OF THE UNIT 4 AGRESSO SOFTWARE

Unit 4 Personeel & Salaris is a combination of payroll and HRM: two different modules that together form one integrated package using the same basic data. The software keeps all legislation, collective labor agreements (CAOs), and the like up to date. Unit 4 Personeel & Salaris is designed on the Microsoft Windows standard.
Table 4. Interviews conducted (Organization: function of the interviewees)

Unidek BV: Personeelsfunctionaris (HRM specialist)
Van Merksteijn: Personeel-/salarisfunctionaris (HRM specialist)
Unit 4 Agresso: Manager HRM (HRM specialist); Hoofd salarisadministratie (line manager); Personeelszaken (HRM specialist)
Zodiac Zoos: Medewerksters P&O (HRM specialist); Administrateurs (line manager)
This software has a modular structure, which makes it possible to create the specific software package the client wishes for. It is also possible for the client to start with a basic structure and extend it in the future. The functional modules are:
• Unit 4 Salaris
• Unit 4 Personeel
• Unit 4 Web (Web-enabled payroll and HRM registration)
• Business Intelligence (reporting tool)
• Document Archive (digital personnel dossier)
• VerzuimSignaal (fully Web-enabled absence control system)
• Import
• External registration (Microsoft Excel)
General Functionalities

Information scout for management information: filter, sort, and group the information, with a link for transfer to Microsoft Word or Excel.
Multiple employers: process multiple administrations (think of a holding company). Applicants can be processed in a separate system; when an applicant becomes a definitive employee, he or she is transferred to the other employees.
Design own rubrics and fields: save extra information.
Signal function: makes sure the user is notified on time about, for example, birthdays and ending contracts.
Information exchange: exchange information with external parties digitally.
Business Intelligence: direct access to an open database.
Decentralized system / external processing: collect information from employees via Excel or the Internet; mutations are processed digitally.
HRM Functionalities
• Basic employee data
• From vacancy till resignation
• 'First day of work' (reporting to the Tax Department)
• Contracts
• Payment agreements
• Extra employee data (partners, children, important addresses)
• Competence management (training, job history, functioning, performance management)
• Sickness information
• Lease data
• Document archive
• Salary mutations
• Automatic signalling of, for example, ending contracts and birthdays
FINDINGS

From the interviews and/or document analysis it appeared that all the general SMEs have an HRM department. If we look at the size of the organizations, only 2 of the 4 fit our definition of an SME (organizations with fewer than 250 employees); Unidek and Van Merksteijn have more than 250 employees. This non-fit may be caused by the fact that Unit 4 Agresso selected the cases for this research and, in their view, these 2 organizations can be categorized as SMEs. Since all the organizations have an HRM department, all of them have a specialized HRM function. This is in accordance with our expectation that organizations with more than 50 employees have a specialized HRM function. If we look at the FTEs of the HRM departments, we see that all 4 organizations have an HRM department of between 1 and 2.65 FTE. If we compare the size of the HRM department with the overall size of the organization, we see that, for example, Van Merksteijn has an HRM department of 1 FTE for 500 FTE, whereas Unidek has an HRM department of 2.65 FTE for 296 FTE. All 4 organizations use Unit 4 Personeel & Salaris as an HRIS. Zodiac Zoos and Unit 4 Agresso have additional e-HRM tools such as Unit 4 Personeel & Salaris Webvastlegging, e-recruitment, e-learning, e-compensation, and e-performance management.
Strategy, e-HRM Goals

Three of the four organizations have neither a formal business plan nor a formal HRM strategy; only Unit 4 Agresso has both, and it is also the only organization that communicates its strategy. Unidek, Van Merksteijn, and Zodiac Zoos do not communicate a strategy through the organization with a formal policy. If we look at the types of strategy of the 4 organizations, we see that 3 organizations have (at least partly) a marketing differentiation strategy. At Zodiac Zoos and Unit 4 Agresso we see a growth strategy, whereas Unidek also has an innovation differentiation strategy (see Table 5). The fact that three of the four organizations have a marketing differentiation strategy may signal that this strategy is the most important one for an SME's survival.
Table 5. Overview of strategies in the empirical sample (Organisation: business plan; formal HRM strategy; clearness of strategy; characteristics of strategy)

Unidek: no; no; not communicated; marketing differentiation / innovation differentiation strategy
Van Merksteijn: no; no; not communicated; marketing differentiation strategy
Zodiac Zoos: no; no; not communicated; growth strategy
Unit 4 Agresso Oost-Nederland: yes; yes; clearly communicated; marketing differentiation / growth strategy
At the two smallest organizations in our research (Zodiac Zoos and Unit 4 Agresso Oost-Nederland) we see a growth strategy. This is in line with de Kok (2003), who stated that this is an important strategy for small organizations. If we look at the goals mentioned for using, or starting to use, e-HRM, we see that most of the goals are related to the quality of the information (making information more visible, locating all information in one central system, a clear overview of the available information, clearness of the information, every user having exactly the same information). In addition, the goals of e-HRM are related to time and cost reduction (extra administrative support for staffing (recruitment and selection), saving time, and developing a more effective and efficient HRM process).
Use of HRIS

At all the organizations, Unit 4 Personeel & Salaris is used as an HRIS at an operational level: the software package is used for administrative tasks. This is mainly because the software package supports functionalities at an operational level. The e-HRM tools at Zodiac Zoos and Unit 4 Agresso Oost-Nederland are also used at an operational level, so we can speak of operational e-HRM at these organizations. This type of operational e-HRM is in line with the expectation that at medium-sized organizations e-HRM is used only at an operational (administrative) level. With regard to the people involved in the use of e-HRM, we expected that in medium-sized organizations the line management, the employees, and the HRM specialists would be involved, but not suppliers or external users. At Unit 4 Agresso this expectation is confirmed, whereas at Zodiac Zoos it is partly confirmed: there, the personnel department and the line managers are involved in the use of e-HRM.
IT Acceptance

If we look at the interviewees' answers with regard to the usefulness of the Unit 4 software, we see that the software is perceived as useful at Unidek, Zodiac Zoos, and Unit 4 Agresso Oost-Nederland. The answers with regard to the ease of use of the Unit 4 Agresso software show that it is not perceived as easy to use at Unidek and Unit 4 Agresso, whereas at Zodiac Zoos it is perceived as easy to use. In the extra questions about IT acceptance we asked the HRM specialist about his perceptions of the usefulness, ease of use, and subjective norm concerning the use of Unit 4 Personeel & Salaris. As stated earlier, Van Merksteijn does not use the HRM functionalities in Unit 4 Personeel & Salaris intensively; consequently, the questions could not be answered in a way that allowed us to assess IT acceptance, and we cannot draw conclusions about the perceived usefulness, perceived ease of use, and subjective norm with regard to Unit 4 Personeel & Salaris at this organization. The indicators of perceived usefulness show that the Unit 4 Agresso software is seen as useful for reasons such as time reduction, quickly and easily generated information and overviews from one source, and the quality of the information. The indicators of perceived ease of use at Unidek and Unit 4 Agresso show that the software is not seen as easy to use because of the experience needed to use it, the inflexibility of the software, and the number of steps required to use a functionality. At Zodiac Zoos, by contrast, the software is perceived as easy to use for reasons such as the well-organized structure of the software, the display of different fonts, and the fact that mutations can be processed in different ways and in the way the users want. This may be because Zodiac Zoos uses Unit 4 Personeel & Salaris Webvastlegging in addition to Unit 4 Personeel & Salaris.
At all the organizations the use of the software is stimulated. The interviewees indicated that they perceive the influence of other people as stimulating and that the opinion of other people is important for their use of the Unit 4 Agresso software. So at all the organizations the subjective norm is important and plays a (positive) role.
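For readers who want to relate these findings to the underlying theory: the constructs used above come from the technology acceptance model (Davis, 1989), extended with subjective norm (Venkatesh & Davis, 2000). A minimal, illustrative specification of the assumed relations (our study is qualitative, so the coefficients below are hypothetical and were not estimated) is

$$BI = \beta_1 \, PU + \beta_2 \, PEOU + \beta_3 \, SN$$

where $BI$ is the behavioral intention to use the system, $PU$ is perceived usefulness, $PEOU$ is perceived ease of use, $SN$ is the subjective norm, and $\beta_1, \beta_2, \beta_3$ are weights that a quantitative follow-up study could estimate, for example with regression or structural equation modeling.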
DISCUSSION

Based on the interviews we can answer the formulated research questions for the four organizations. If we look at the types of strategy of the 4 organizations, we see that 3 organizations have (at least partly) a marketing differentiation strategy; at Zodiac Zoos and Unit 4 Agresso we see a growth strategy, whereas Unidek also has an innovation differentiation strategy. All the organizations have an HRM department, so all of them have a specialized HRM function. If we look at the HRM practices, we see that three of the four organizations have HRM practices; only at one organization did we not see any HRM practices. At all the organizations, Unit 4 Personeel & Salaris is used as an HRIS at an operational level: the software package is used for administrative tasks, mainly because the package supports only functionalities at an operational level. The e-HRM tools at Zodiac Zoos and Unit 4 Agresso Oost-Nederland are also used at an operational level, so we can speak of operational e-HRM at these organizations. With regard to the people involved in the use of e-HRM, we expected that in medium-sized organizations the line management, the employees, and the HRM specialists would be involved, but not suppliers or external users. At Unit 4 Agresso this expectation is confirmed, whereas at Zodiac Zoos it is partly confirmed: there, the personnel department and the line managers are involved in the use of e-HRM.
According to the interviewees, the 4 organizations have the following goals for using e-HRM:
• Extra administrative support for staffing (recruitment & selection)
• Making the job benefits more visible
• Saving time
• Making the information more visible
• Developing a more effective and efficient HRM process
• Having all information located in one central system
• A clear overview of the available information
• Clearness of the information
• Every user having exactly the same information
If we look at the answers with regard to the usefulness of the Unit 4 Agresso software, we can see that this software is perceived as useful. The interviewees at the 4 organizations indicated that the software is useful because of:
• Reliable information
• Reduced administrative tasks
• Quickly and easily generated information and overviews from one source
• Time reduction
• All information can be found in one location
The answers with regard to the ease of use of the Unit 4 Agresso software show that this software is not perceived as easy to use at 2 organizations. The interviewees at these 2 organizations indicated that the software is not easy to use because of:
• The experience needed to use the software easily
• The software's lack of flexibility (too standardized)
• The number of steps needed to use a functionality
At one organization the interviewees indicated that the software is easy to use because of:
• The well-organized structure of the software
• The display of different fonts
• The fact that mutations can be processed in different ways and in the way the users want
The answers to the questions about subjective norm show that at all 4 organizations the interviewees perceive the influence of other people as stimulating and consider the opinion of other people important for the use of the Unit 4 Agresso software. Zodiac Zoos and Unit 4 Agresso Oost-Nederland, which both use e-HRM tools, perceive that the use of e-HRM helps them make HRM more effective; in both cases time reduction is seen as the main reason. Overall, the interviews suggest that the use of HRIS / e-HRM at an operational level is perceived as useful at three organizations. The software is not perceived as easy to use at 2 organizations, whereas one organization perceives it as easy to use; for one organization we could not assess the usefulness and ease of use of the software. In relation to HRM effectiveness, we can conclude that HRM is perceived as effective at all 4 organizations, because goals are perceived as attained and/or the perceptions about HRM are positive. We cannot conclude on which factors these (positive) perceptions are based. This research at the four organizations has a couple of limitations. First, two of the four organizations do not fit our definition of an SME. Second, we interviewed only a limited number of persons (1-3 per organization), so the perceptions about IT acceptance and HRM effectiveness are based on only 1-3 persons in each organization.
CONCLUSION

Overall, the interviews suggest that the use of HRIS / e-HRM at an operational level is perceived as useful at all organizations. The software is not perceived as easy to use at five organizations, whereas one organization perceives the software as easy to use; at one organization the ease of use is partly perceived as positive, and for one organization we could not assess the usefulness and ease of use of the software. Of the organizations that use e-HRM, four perceive that the use of e-HRM helps them make HRM more effective: at two organizations time reduction is seen as the main reason, and at two organizations the quality of information is seen as the main reason. At the other two organizations it is unknown how the use of e-HRM makes HRM more effective. Overall, we can conclude that in this research the use of e-HRM tools is perceived as useful, whereas it is not perceived as easy to use. The organizations perceive that the use of e-HRM helps them make HRM more effective. This research has some limitations. The first limitation is the number of cases: we studied only 8 organizations, a small fraction of all the organizations in the Netherlands. The second limitation is the number of interviewees: we interviewed only 19 persons, so the perceptions about the use of e-HRM in each organization are based on 1-3 persons. The third limitation is the way the organizations were selected: we did not select them ourselves. The organizations were selected by Unit 4 Agresso because they are its customers and use one of its software packages. The fourth limitation is the fact that not all the organizations fit the definition of an SME.
This non-fit may be caused by the fact that Unit 4 Agresso selected the cases for this research and, in their view, these organizations can be categorized as SMEs. The fifth limitation is the fact that not all the organizations use the software (although it is the same software) as e-HRM; some organizations use it only as an HRIS.
Further Research

Due to these limitations, further research is needed: for example, research that includes only organizations fitting the definition of an SME, or more in-depth research with a larger number of interviewees per organization. Our research was exploratory; further research could build on it with a descriptive or even explanatory design. In addition to our qualitative research, more quantitative research would be useful in the future to acquire more knowledge about e-HRM.
ACKNOWLEDGMENT

The authors gratefully acknowledge the helpful comments of Maarten van Riemsdijk and Huub Ruël on early versions of this work.
REFERENCES

Adam, H., & Berg, R. van den (2001). E-HRM, inspelen op veranderende organisaties en medewerkers. Schoonhoven: Academic Service. Balkin, D. B., & Logan, J. W. (1988). Reward policies that support entrepreneurship. Compensation and Benefits Review, 18–25. Ball, K. S. (2001). The use of human resource information systems: a survey. Personnel Review, 30(6), 677–693. doi:10.1108/EUM0000000005979
Banks, M. C., Bures, A. L., & Champion, D. L. (1987). Decision making factors in small business: Training and development. Journal of Small Business Management, 25(1), 19–25. Baron, J. N., & Hannan, M. T. (2002). Organizational blueprints for success in high-tech start-ups: Lessons from the Stanford project on emerging companies. California Management Review, 44(3), 8–36. Benbasat, I., Goldstein, D. K., & Mead, M. (1987). The case research strategy in studies of information systems. MIS Quarterly, 11(3), 369–386. doi:10.2307/248684 Boselie, P., Paauwe, J., & Jansen, P. (2001). Human Resource Management and performance; lessons from the Netherlands. International Journal of Human Resource Management, 12(7), 1107–1125. doi:10.1080/09585190110068331 Bowen, D. E., & Ostroff, C. (2004). Understanding HRM-Firm Performance linkages: The role of the "strength" of the HRM system. Academy of Management Review, 29(2), 203–221. Brown, S. A., Massey, A. P., Montoya-Weiss, M. M., & Burkman, J. R. (2002). Do I really have to? User acceptance of mandated technology. European Journal of Information Systems, 11, 283–295. doi:10.1057/palgrave.ejis.3000438 Cappelli, P. (2001). Making the most of on line recruiting. Harvard Business Review, 79(2), 139–146. Cardon, M. S., & Stevens, C. E. (2004). Managing human resources in small organizations: what do we know? Human Resource Management Review, 14, 295–323. doi:10.1016/j.hrmr.2004.06.001 Cardy, R. L., & Miller, J. S. (2005). eHR and Performance Management. In H. G. Gueutal & D. L. Stone (Eds.), The Brave New World of eHR. Human Resources Management in the Digital Age. San Francisco: Jossey-Bass.
Cascio, W. F. (2000). Managing a virtual workplace. The Academy of Management Executive, 14(3), 81–90. Cedar (2001). Cedar 2001 Human Resources Self Service / Portal Survey. Fourth Annual Survey. Baltimore: Cedar. Chu, P., & Siu, W. S. (2001). Coping with the Asian economic crisis: The rightsizing strategies of small- and medium-sized enterprises. International Journal of Human Resource Management, 12(5), 845–858. doi:10.1080/09585190110047866 Cober, R. T., Brown, D. J., Levy, P. E., Keeping, L. M., & Cober, A. L. (2003). Organizational websites: Website content and style as determinants of organizational attraction. International Journal of Selection and Assessment, 11, 158–169. doi:10.1111/1468-2389.00239 Cober, R. T., Douglas, J. B., & Levy, P. E. (2004). Form, content and function: An evaluative methodology for corporate employment web sites. Human Resource Management, 43(2/3), 201–218. Collins, H. (2001). Corporate portals: Revolutionizing information access to increase productivity and drive the bottom line. New York: AMACOM. Compeer, N., Smolders, M., & de Kok, J. (2005). Scale effects in HRM research: A discussion of current HRM research from an SME perspective. EIM, Zoetermeer, the Netherlands. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339. doi:10.2307/249008 Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: a comparison of two theoretical models. Management Science, 35(8), 982–1002. doi:10.1287/mnsc.35.8.982
De Kok, J. M. P. (2003). Human Resource Management within Small and Medium-Sized Enterprises. Tinbergen Institute Research Series, 313. Amsterdam: Thela Thesis. de Kok, J. M. P., Uhlaner, L. M., & Thurik, A. R. (2003). Human Resource Management with small firms; facts and explanations. Report Series, Research in Management. Erasmus Research Institute of Management. de Kok, J. M. P., Uhlaner, L. M., & Thurik, A. R. (2006). Professional HRM practices in family owned-managed enterprises. Journal of Small Business Management, 44(3), 441–460. doi:10.1111/j.1540-627X.2006.00181.x Dulebohn, J. H., & Marler, J. H. (2005). E-compensation: The potential to transform practice? In H. G. Gueutal & D. L. Stone (Eds.), The Brave New World of eHR. Human Resources Management in the Digital Age. San Francisco: Jossey-Bass. European Commission. (2000). The European Observatory for SMEs. Sixth Report. Luxembourg: Office for Official Publications of the European Communities. European Commission. (2004). Highlights from the 2003 Observatory. Observatory of European SMEs 2003, no. 8. Luxembourg: Office for Official Publications of the European Communities. Firestone, J. M. (2003). Enterprise information portals and knowledge management. Boston: Butterworth-Heinemann. Flanagan, D. J., & Deshpande, S. P. (1996). Top management's perceptions of changes in HRM practices after union elections in small firms. Journal of Small Business Management, 23–34. Galanaki, E. (2002). The decision to recruit on line: A descriptive study. Career Development International, 243–251. doi:10.1108/13620430210431325
Gibcus, P., & Kemp, R. G. M. (2003). Strategy and small firm performance. EIM, Zoetermeer, the Netherlands. Grensing-Pophal, L. (2001). HR and the corporate intranet: Beyond "brochureware". SHRM white paper. Retrieved May 9, 2003, from www.shrm.org/hrresources/whitepapers_publisched/CMS_000212.asp Guest, D. E., & Peccei, R. (1994). The nature and causes of effective Human Resource Management. British Journal of Industrial Relations, 32(2), 219–242. doi:10.1111/j.1467-8543.1994.tb01042.x Hageman, J., & van Kleef, J. (2002). e-HRM is de hype voorbij (cited in March 2006). http://www.nvp-plaza.nl/e-hrm/e-HRMvisie&missie_files/frame.htm Hargie, O., & Tourish, D. (2002). Handbook of communication audits for organisations. Hove/New York: Routledge. Harrington, A. (2002, May 13). Can anyone build a better monster? Fortune, 145, 189–192. Heneman, H. G., & Berkley, R. A. (1999). Applicant attraction practices and outcomes among small businesses. Journal of Small Business Management, 53–74. Heneman, R. L., & Tansky, J. W. (2002). Human Resource Management models for entrepreneurial opportunity: Existing knowledge and new directions. In J. Katz & T. M. Welbourne (Eds.), Managing people in entrepreneurial organizations, 5, 55–82. Amsterdam: JAI Press. Heneman, R. L., Tansky, J. W., & Camp, S. M. (2000). Human resource management practices in small and medium-sized enterprises: Unanswered questions and future research perspectives. Entrepreneurship Theory and Practice, 11–26.
Hogler, R. L., Henle, C., & Bernus, C. (1998). Internet recruiting and employment discrimination: a legal perspective. Human Resource Management Review, 8(2). doi:10.1016/S1053-4822(98)80002-8 Hornsby, J. S., & Kuratko, D. F. (1990). Human resource management in small business: Critical issues for the 1990s. Journal of Small Business Management, 9–18. Huselid, M. A. (1995). The impact of human resource management practices on turnover, productivity, and corporate financial performance. Academy of Management Journal, 38, 635–672. doi:10.2307/256741 Huselid, M. A. (2003). Editor's note: Special issue on small and medium-sized enterprises: A call for more research. Human Resource Management, 42(4), 297. doi:10.1002/hrm.10090 Hwang, Y. (2004). Investigating enterprise systems adoption: uncertainty avoidance, intrinsic motivation, and the technology acceptance model. European Journal of Information Systems, 14, 150–161. doi:10.1057/palgrave.ejis.3000532 Ilgen, D. R., Fischer, C. D., & Taylor, M. S. (1979). Consequences of individual feedback on behaviour in organizations. The Journal of Applied Psychology, 64, 349–371. doi:10.1037/0021-9010.64.4.349 Kane, B., Crawford, J., & Grant, D. (1999). Barriers to effective HRM. International Journal of Manpower, 20(8), 494–515. doi:10.1108/01437729910302705 Karahanna, E., & Straub, D. W. (1999). The psychological origins of perceived usefulness and ease of use. Information & Management, 35, 237–250. doi:10.1016/S0378-7206(98)00096-2 Karrer, T., & Gardner, E. (2003). E-Performance Essentials. Retrieved April 10, 2006, from http://www.learningcircuits.org/2003/dec2003/karrer.htm
Keebler, T. J., & Rhodes, D. W. (2002). E-HR: becoming the “Path of least resistance”. Employment Relations Today, summer 2002, 57-66.
Moll, P. (1983). Should the third world have information technologies? IFLA Journal, 9(4), 297. doi:10.1177/034003528300900406
Kehoe, J. F., Dickter, D. N., Russell, D. P., & Sacco, J. M. (2005). E-Selection. In H.G. Gueutal & D. L. Stone (Eds.), The Brave New World of eHR. Human Resources Management in the Digital Age. San Francisco: Jossey-Bass.
Morris, M. G., & Venkatesh, V. (2000). Age differences in technology adoption decisions: Implications for a changing workforce. Personnel Psychology, 53, 375–403. doi:10.1111/j.1744-6570.2000.tb00206.x
Klaas, B. S., McClendon, J., & Gainey, T. W. (2000). Managing HR in the small and medium enterprise: The impact of professional employer organizations. Entrepreneurship Theory and Practice, 25(1), 107–124.
Murray, D. (2001). E-Learning for the Workplace: Creating Canada’s Lifelong Learners. The Conference Board of Canada, www.conferenceboard.ca.
Kotey, B., & Slade, P. (2005). Formal human resource management in small growing firms. Journal of Small Business Management, 43(1), 16–40. doi:10.1111/j.1540-627X.2004.00123.x Lai, V. S., & Li, H. (2004). Technology acceptance model for internet banking: an invariance analysis. Information & Management, 42, 373–386. Lengnick-Hall, M., & Moritz, S. (2003). The impact of e-hr on the human resource management function. Journal of Labor Research, 14(3), 365–379. doi:10.1007/s12122-003-1001-6 Lepak, D. P., & Snell, S. A. (1998). Virtual HR: strategic human resource management in the 21st century. Human Resource Management Review, 8(3), 215–234. doi:10.1016/S1053-4822(98)90003-1 Little, B. L. (1986). The performance of personnel duties in small Louisiana firms: A research note. Journal of Small Business Management, (October), 66–71. McConnel, B. (2002, April). Companies lure job seekers in new ways: Corporate Web sites snare applicants on line. HR News, 1, 12. Meijaard, J., Mosselman, M., Frederiks, K. F., & Brand, M. J. (2002). Organisatietypen in het MKB. Zoetermeer: EIM.
MKB-Nederland. (2006). Het midden- en kleinbedrijf in een oogopslag. Retrieved August 22, 2006, from http://www.mkb.nl/Het_midden-_en_kleinbedrijf Nederlandse Vereniging voor Personeelsmanagement en Organisatieontwikkeling. (2006). On-line resultaten van de NVP e-HRM enquete. Retrieved March 2006, from http://www.nvp-plaza.nl/documents/e-hrmresults2005.html Nickel, J., & Schaumberg, H. (2004). Electronic privacy, trust and self-disclosure in e-recruitment. CHI 2004, April 24–29, Vienna, Austria. Paré, G. (2004). Investigating information systems with positivist case study research. Communications of the Association for Information Systems, 13, 233–264. Rousseau, D. M. (1995). Psychological Contracts in Organizations. Thousand Oaks, CA: Sage. Ruël, H. J. M., Bondarouk, T., & Looise, J. C. (2004). E-HRM: innovation or irritation. An explorative empirical study in five large companies on web-based HRM. Management Review, 15(3), 364–380. Ruël, H. J. M., Bondarouk, T., & van der Velde, M. (2007). The contribution of e-HRM to HRM effectiveness. Employee Relations, 29(3), 280–291. doi:10.1108/01425450710741757
Ruël, H. J. M., de Leede, J., & Looise, J. C. (2002). ICT en het management van arbeidsrelaties; hoe zit het met de relatie? In R. Batenburg, J. Benders, N. Van den Heuvel, P. Leisink, & J. Onstenk, (Eds.), Arbeid en ICT in onderzoek. Utrecht: Lemma.
Teo, S. T. T., & Crawford, J. (2005). Indicators of Strategic HRM effectiveness: A case study of an Australian public sector agency during commercialization. Public Personnel Management, 34(1), 1–16.
Ruta, C. D. (2005, Spring). The application of change management theory to HR portal implementation in subsidiaries of multinational corporations. Human Resource Management, 44(1), 35–53. doi:10.1002/hrm.20039
Twynstra Gudde. (2003). e-HRM onderzoek 2002/2003. Amersfoort: Twynstra Gudde.
Saadé, R., & Bahli, B. (2004). The impact of cognitive absorption on perceived usefulness and perceived ease of use in on-line learning: an extension of the technology acceptance model. Information & Management, 42, 317–327. doi:10.1016/j.im.2003.12.013 Sels, L., de Winne, S., Delmotte, J., Maes, J., Faems, D., & Forrier, A. (2006). Linking HRM and Small Business Performance: An Examination of the Impact of HRM Intensity on the Productivity and Financial Performance of Small Businesses. Small Business Economics, 26, 83–101. doi:10.1007/s11187-004-6488-6 Stone, D. L., Lukaszewski, K. M., & Isenhour, L. C. (2005). E-Recruiting: Online strategies for attracting talent. In H. G. Gueutal & D. L. Stone (Eds.), The Brave New World of eHR. Human Resources Management in the Digital Age. San Francisco: Jossey-Bass. Tavangarian, D., Leypold, M. E., Nölting, K., Röser, M., & Voigt, D. (2004). Is e-Learning the Solution for Individual Learning? Electronic Journal of e-Learning, 2(2), 273–280. Taylor, S., & Todd, P. (1995). Assessing IT usage: the role of prior experience. MIS Quarterly, December 1995, 561–570.
Uman, I. D. (2006). Wat is e-HRM? Retrieved March 2006, from http://www.ehrmplein.nl/ van der Bos, M., & van der Heijden, H. (2004). E-HRM. In F. Kluytmans (Ed.), Leerboek personeelsmanagement. Groningen: Wolters-Noordhoff. van Riemsdijk, M., Bondarouk, T., & Knol, H. (2005). Researching Personnel Management in Dutch Small and Medium Sized Enterprises: A literature overview and a research model. Paper presented at the International Dutch HRM Network Conference, 4–5 November 2005, Enschede, The Netherlands. Venkatesh, V. (2000, December). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11(4), 342–365. doi:10.1287/isre.11.4.342.11872 Venkatesh, V., & Davis, F. D. (2000, February). A theoretical extension of the technology acceptance model: Four longitudinal field experiments. Management Science, 46(2), 186–204. doi:10.1287/mnsc.46.2.186.11926 Venkatesh, V., & Morris, M. G. (2000). Why don't men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior. MIS Quarterly, 24(1), 115–139. doi:10.2307/3250981 Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of Information technologies: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Voerman, M., & Van Veldhoven, M. (2007). Attitude towards e-HRM: an empirical study at Philips. Personnel Review, 36(6), 887–902. doi:10.1108/00483480710822418 Watson Wyatt. (2002). B2E / eHR: Survey results 2002. Reigate: Watson Wyatt. Williamson, I. O., Lepak, D. P., & King, J. (2003). The effect of company recruitment web site orientation on individuals' perceptions of organizational attractiveness. Journal of Vocational Behavior, 63, 242–263. doi:10.1016/S0001-8791(03)00043-5 Wright, P., & Dyer, L. (2000). People in the e-business: new challenges, new solutions. Working paper 00-11, Center for Advanced Human Resource Studies, Cornell University. Wright, P. M., McMahan, G. C., Snell, S. A., & Gerhart, B. (2001). Comparing line and HR executives' perceptions of HR effectiveness: services, roles, and contributions. Human Resource Management, 40(2), 111–123. doi:10.1002/hrm.1002
Yin, R. K. (1993). Applications of case study research. Applied Social Research Methods Series, 34. Sage Publications. Yu, J., Ha, I., Choi, M., & Rho, J. (2005). Extending the TAM for a t-commerce. Information & Management, 42, 965–976. doi:10.1016/j.im.2004.11.001
KEY TERMS AND DEFINITIONS

e-HRM: The support of HRM processes in organizations with the use of Internet technology.
HRIS: Human Resource Information System.
Perceived Ease of Use: The degree to which a person believes that using a particular system would be free of effort.
Perceived Usefulness: The degree to which a person believes that using a particular information system would enhance his or her job performance.
SME: Small and medium-sized organizations with fewer than 250 full-time equivalent employees.
This work was previously published in Handbook of Research on E-Transformation and Human Resources Management Technologies: Organizational Outcomes and Challenges, edited by Tanya Bondarouk, Huub Ruel, Karine Guiderdoni-Jourdain, and Ewan Oiry, pp. 304-323, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 5.15
Adoption, Improvement, and Disruption:
Predicting the Impact of Open Source Applications in Enterprise Software Markets

Michael Brydon, Simon Fraser University, Canada
Aidan R. Vining, Simon Fraser University, Canada
ABSTRACT

This article develops a model of open source disruption in enterprise software markets. It addresses the question: Is free and open source software (FOSS) likely to disrupt markets for commercial enterprise software? The conventional wisdom is that open source provision works best for low-level system-oriented technologies, while large, complex enterprise business applications are best served by commercial software vendors. The authors challenge the conventional wisdom by developing a two-stage model of open source disruption in enterprise software markets that emphasizes a virtuous cycle of adoption and lead-user improvement of the software. The two stages are an initial incubation stage (the I-Stage) and a subsequent snowball stage (the S-Stage). Case studies of several FOSS projects demonstrate the model's ex post predictive value. The authors then
apply the model to SugarCRM, an emerging open source CRM application, to make ex ante predictions regarding its potential to disrupt commercial CRM incumbents.
INTRODUCTION

Software firms increasingly face the possibility that free and open source software (FOSS) will disrupt their most profitable markets. A disruptive innovation is a new product, service, or business model that enters a market as a low-priced, underperforming alternative to offerings from market leaders, but which, through a process of rapid improvement, eventually satisfies the requirements of mainstream consumers and supplants incumbents (Christensen, 1997; Markides, 2006). Prototypical examples of disruptive innovations include discount online brokerages (which won
significant market share away from established full-service brokerages) and personal computers (which evolved into a viable substitute for larger, more expensive mini and mainframe computers). The disruptive effect of FOSS on commercial software markets has so far been variable. At one extreme, the Apache project has forced commercial vendors of Web servers to either exit the market (IBM, Netscape), offer their products for free (Sun), or bundle their software at zero price with other offerings (Microsoft’s IIS). At the other extreme, FOSS entrants in the desktop operating system and office productivity software markets have had almost no impact on the dominance of incumbent commercial software vendors. Despite the economic importance of the commercial software industry, there has been little formal analysis of the factors that lead to major disruption by FOSS in some markets and negligible disruption in others. This is especially true for enterprise applications—the complex software programs that support critical, cross-functional business processes, such as order management, financial reporting, inventory control, human resource planning, and forecasting. FOSS, like all forms of open innovation (West & Gallagher, 2006), is characterized by recursive interdependency between adoption and technological improvement. To this point, open source production has worked best for software developed by hackers (software experts) for use by hackers. However, enterprise applications differ in important ways from commonly cited FOSS successes, such as Apache, the Perl programming language, and the Linux operating system. The intrinsic and culture-specific motivations that drive voluntary participation in FOSS projects by software experts are likely to be weaker or non-existent for business-oriented software (Fitzgerald, 2006). We might therefore expect FOSS to have less of an impact on the provision of enterprise software. However, an alternative theoretical perspective is possible. Under certain conditions, firms may have incentives to contribute to the open source
development of enterprise applications such as enterprise resource planning (ERP), customer relationship management (CRM), and supply chain management (SCM). The willingness of firms to pay programmers to write code and contribute it to a FOSS project as part of their employees’ regular duties eliminates the importance of conventional hacker-level incentives in predicting whether a FOSS project will attract developers. Instead, the emphasis shifts to understanding the conditions under which a firm is motivated to make investments in a project for which it cannot fully appropriate the benefits. We attempt to resolve the conflicting perspectives regarding the potential impact of FOSS in enterprise software markets by developing a dynamic model of FOSS disruption. Our model draws on both the Disruptive Technology and the Adoption of Technology literatures, as neither literature alone can fully account for the high variability in the level of disruption achieved by FOSS in different software markets. The disruptive technology literature emphasizes the role of technological improvement over time in fostering disruption. For example, Christensen (1997) illustrates the disruption dynamic by plotting the historical performance improvement demanded by the market against the performance improvements supplied by the technology. Improvements in performance over time are undoubtedly critical to disruption; however, little is said about the precise mechanisms by which the improvements are achieved. As Danneels (2004) points out, an ex post analysis of general trends is of little use when making ex ante predictions about the disruptive potential of a particular technology. The key to predicting whether a technology is potentially disruptive or “merely inferior” (Adner, 2002) is the identification of plausible mechanisms for rapid and significant improvement along dimensions of performance that matter to mainstream users. Models from the Adoption of Technology literature are more forward-looking, in that they focus on specific attributes of the innovation in order
to predict the innovation’s adoptability. However, the critical shortcoming of most adoption models for our purposes is that they are static. Disruption is a fundamentally dynamic process where the attributes of both the innovation and the adopters of the innovation change over time. We attempt to capture these dynamics by modeling disruption as two distinct stages in which the nature of the interdependency between adoption and technological improvement changes significantly. The article proceeds as follows. In the next section, we review the adoption literature and focus on Fichman and Kemerer’s (1993) distinction between organizational adoptability and community adoptability. We then examine the open source model of production and the incentives for firm-level contribution to the production of a public good. We draw on the concepts of organizational and community adoptability to develop a two-stage model of FOSS adoption and disruption, and assess the ex post predictive power of the model by examining the development histories and subsequent market impacts of several established FOSS projects. We then apply the model to one type of enterprise software, CRM, to predict, ex ante, whether SugarCRM, a relatively new FOSS entrant, will disrupt the commercial CRM market. Finally, we summarize our conclusions and suggest some implications for theory and practice.
INNOVATION AND TECHNOLOGY ADOPTION

A number of different adoption models in the literature seek to account for variability in the market success of new technologies (e.g., Ravichandran, 2005; Riemenschneider, Hardgrave, & Davis, 2002; Rai, Ravichandran, & Samaddar, 1998; Iacovou, Benbasat, & Dexter, 1995; Taylor & Todd, 1995). The theoretical foundations of these models are diverse (Fichman, 2000). Variously, they build on classic communication
and diffusion mechanisms (e.g., Rogers, 1995), institutional theory (e.g., Tingling & Parent, 2002), organizational learning (e.g., Attewell, 1992), or industrial economics (e.g., Katz & Shapiro, 1994). The purpose here is not to provide another general adoption model, but rather to develop a middle-range theory (Fichman, 2000) that is applicable to the specific context of open source production and markets for enterprise software. The initial point of our analysis is Fichman and Kemerer's (1993) Adoption Grid (see Figure 1). The grid aspires to integrate the "Diffusion of Innovations" perspective (Rogers, 1995) and the "Economics of Technology Standards" perspective. Each perspective emphasizes a complementary set of adoption antecedents. The original Diffusion of Innovations perspective identified five attributes (relative advantage, compatibility, complexity, observability, and trialability) that affect the likelihood that a given population will adopt a new idea, product, or practice (hereafter, a technology). Although the subsequent literature has augmented and extended these attributes (e.g., Moore & Benbasat, 1991; Davis, 1989), the model has retained its original focus on the technology and the technology's fit within the adopting organization. Fichman and Kemerer (1993) aggregate these attributes as organizational adoptability, arguing that:

…organizations are more likely to be willing and able to adopt innovations that offer clear advantages, that do not drastically interfere with existing practices, and that are easier to understand… Adopters look unfavorably on innovations that are difficult to put through a trial period or whose benefits are difficult to see or describe. (p. 9)

The Economics of Technology Standards perspective, in contrast, focuses on increasing returns to adoption. In considering this perspective, it is useful to make a distinction between direct increasing returns (network benefits or positive network externalities) and indirect increasing returns (Katz & Shapiro, 1994).
Figure 1. Adoption Grid (Fichman & Kemerer, 1993)
The fax machine is a classic example of a technology that exhibits direct network benefits: the value of a machine to a given user increases when the technology is adopted by others. However, indirect sources of increasing returns to adoption, such as learning-by-using and technology interrelatedness, are often more important in the enterprise software context. Learning-by-using is the process whereby the technology's price-performance ratio improves as users accumulate experience and expertise in using it (Attewell, 1992). Occasionally, the technology is reinvented during adoption and the resulting improvements are fed back to the supplier and other adopters, further amplifying indirect benefits (Rogers, 1995). Technological interrelatedness considers a technology's ability to attract the provision of complementary technologies by third parties. In some cases, complementarity is straightforward: the adoption of DVD technology relied on the availability of movies in this format. In other cases, however, complementarity may emerge in the form of propagating organizations (Fichman, 2000), such as consultants, publishers, and standards organizations.
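As a stylized illustration of the distinction (this sketch is ours and is not part of Fichman and Kemerer's framework): with $n$ compatible adopters, each user of a fax-like technology can reach $n-1$ others, so under a Metcalfe-style assumption the aggregate value of the network grows roughly as

$$V_{\text{direct}}(n) \propto n(n-1) \approx n^2,$$

whereas indirect increasing returns such as learning-by-using depend on cumulative use rather than on connectivity, and might be sketched as a price-performance ratio $r(n)$ that rises with cumulative adoption, for example $r(n) = r_0 + \gamma \log n$ for some hypothetical learning rate $\gamma$. The particular functional forms matter less than the shared property that each new adopter raises the value of the technology for everyone else.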
Drawing on this perspective, Fichman and Kemerer summarize a technology's community adoptability in terms of four attributes: prior technology drag, irreversibility of investments, sponsorship, and expectations. All four attributes are mediated to some extent by increasing returns to adoption.

The historical evidence suggests that community adoptability can trump organizational adoptability in determining market outcomes when a technology exhibits strong increasing returns to adoption. Competition between Betamax and VHS and between DOS and the Mac OS shows that superiority in terms of technology-related attributes (such as relative advantage) does not necessarily ensure dominance (Hill, 1997). However, relying on community adoptability as the basis for predicting market success is problematic because the presence of direct and indirect increasing returns can create a logic of opposition (Robey & Boudreau, 1999). Direct increasing returns, learning-by-using, and the existence of complements can all catalyze the rapid adoption of a new technology (e.g., the MP3 format for digital audio); however, increasing returns can alternately be a potent source of prior technology drag (e.g., the QWERTY keyboard). In extreme cases, a technology market that exhibits strong increasing returns to adoption tips in favor of a single product or standard (Shapiro & Varian, 1999). Firms that bypass the established standard in order to adopt a new and "better" technology must forego the benefits of the incumbent's installed base (Farrell & Saloner, 1986). For example, firms that migrate from Microsoft Windows to the desktop version of Linux face high switching costs, independent of any quality differences of the Linux software, because of numerous incompatibilities with the large installed base of Windows.

Variations in organizational adoptability and community adoptability define the four quadrants of the Adoption Grid shown in Figure 1. Innovations that fall in the experimental quadrant rate poorly in terms of both organizational and community adoptability and are therefore unlikely to succeed in the market without further development. Innovations that fall in the niche quadrant rank highly in terms of organizational adoptability but poorly in terms of community adoptability. Niche innovations typically achieve quick adoption by a small base of dedicated users who value the product's attributes. However, either the absence of increasing returns to adoption or the presence of barriers to community adoptability dampens adoption beyond the niche. Innovations that fall into the slow mover quadrant provide community benefits but do not offer a compelling rationale (in terms of improved functionality or fit) for organizational adoption. Such technologies are often adopted only when replacement of the firm's existing generation of technology becomes necessary (Hovav, Patnayakuni, & Schuff, 2004). Finally, innovations that fall in the dominant technology quadrant are those that score well in terms of both organizational and community adoptability.
OPEN SOURCE PRODUCTION

Open source production relies on two innovations. The first is the form of licensing—often referred to
as ‘copyleft’—which precludes the enforcement of exclusionary property rights for the good (Weber, 2004). Under most FOSS licenses, the program’s source code can be downloaded, compiled, used, and modified by anyone interested in doing so. The second innovation is the use of low-cost, ubiquitous networks and collaborative tools that enable large-scale distributed software development and testing. Open source software is thus free in two senses: first, the non-excludability enforced by FOSS licenses in conjunction with the presence of high-speed transmission networks means that the software can be acquired at low marginal cost; second, access to the software’s source code means that users are able to adapt and improve the software. Although low cost and the freedom to innovate make FOSS attractive to potential adopters, it is less clear why participation in the provision of such software might make sense for developers. The central problem is that FOSS is a public good (Lerner & Tirole, 2002; Weber, 2004). Like all software, FOSS is non-rivalrous in consumption (consumption by one user does not preclude consumption by other users). And, in contrast to commercial software, FOSS licenses ensure nonexcludability (no one can be denied consumption or use of the good). Economic theory suggests that pure public goods will not be supplied or be seriously undersupplied by the market because individual contributors to the production of the good are not financially rewarded for their investment of time and other resources (Arrow, 1970). The question facing any potential adopter of FOSS software—but especially firms seeking complex, mission-critical systems—is whether provision and continuous improvement of the software is sustainable. Critics of what might be called the naive pure public goods expectations for FOSS point out that a blanket prediction of undersupply is an oversimplification of the incentive structures facing developers. Benkler (2002), for example, notes that Internet-based collaboration tools permit
developers to make small, incremental contributions to open source projects at low (essentially negligible) personal opportunity cost. Others have emphasized the non-pecuniary and delayed or indirect pecuniary rewards that individuals receive from membership in the hacker culture (Weber, 2004; Himanen, Torvalds, & Castells, 2002; Lerner & Tirole, 2002). It is also important to recognize that the popular image of a lone hacker working voluntarily on a piece of shared code is no longer representative of the largest, most established FOSS projects. Although the initial releases of the Linux kernel and the PHP scripting language were the result of massive personal investments by Linus Torvalds and Rasmus Lerdorf, respectively, further refinement has been increasingly dominated by for-profit firms. Most of the development work on the Linux kernel, for example, is now contributed by professional software developers as part of their employment contracts (Lyons, 2004; Morton, 2005). The economic incentives driving the involvement of for-profit firms in FOSS are diverse. Some firms, such as IBM, Sun, and HP, expect to profit from stimulating the adoption of a wide spectrum of FOSS products in order to sell complementary products and/or consulting services. Other firms, such as Redhat and MySQL AB, have a similar interest, but have focused their attention on a single FOSS project. In some cases, dual licensing models are used in which the for-profit firm develops and sells an enhanced version of a commercial product built on a FOSS core. For example, MySQL AB maintains a FOSS version of the MySQL database system but sells a commercial version of the database as well as technical support and training. Such firms have incentives to use revenues from their commercial products and services to cross-subsidize the development of the public good at the center of their technological system. User-firms may also contribute to open source projects. User-firms are organizations—such as banks, utilities, consumer goods firms, governments, and non-profits—that rely on FOSS to
support the production and delivery of their core products and services. User-firms are thus distinct from the many firms whose business models depend on developing or supporting FOSS. There are many aspects of organizational adoptability that attract user-firms to FOSS. The most obvious is initial cost—an entire stack of software applications can be assembled without paying licensing fees. User-firms may also adopt FOSS to avoid lock-in and excessive dependence on a commercial vendor. Finally, and most importantly from the perspective of this article, many user-firms adopt FOSS because it provides them with the flexibility to innovate and customize. For example, Siemens ICN division constructed its award-winning ShareNet knowledge management system by modifying an open source community system from ArsDigita (MacCormack, 2002). Google has developed a customized version of the Linux operating system that permits it to use a cluster of thousands of inexpensive computers to power its search service (Battelle, 2005). Modifications of existing products by lead-users can be a valuable source of innovation and product improvement (von Hippel, 1998). Innovations developed by lead-users typically have a high potential for market success for two reasons (Franke, von Hippel, & Schreier, 2006): (1) lead-users have incentives to innovate because the expected payoffs from finding solutions to their problems are large; and (2) as leads, they anticipate emerging market trends and the requirements of mainstream users. Lead-user development has traditionally functioned well in FOSS due to the relationship between the users and the software they use. Much of the source code for system-oriented software (such as Linux utilities, Sendmail, and Perl) has been contributed by individuals who have been motivated to improve the software in order to make their primary jobs easier (Weber, 2004). Moreover, the provision of an innovation toolkit (a critical enabler of lead-user development) is implicit in the norms and collaboration infrastructure provided by FOSS projects.
This toolkit is accessible to users of systems-oriented software because they typically possess the specialized knowledge and skills needed to implement their own solutions to software problems. FOSS licenses and the established culture of the FOSS community encourage users to contribute their solutions back to the project, thereby ensuring that the software improves as its adoption increases. The problem that arises in the context of enterprise applications is that the conditions that foster lead-user improvement of systems-oriented FOSS seem much less likely to occur. First, individual users of enterprise business software tend not to be software experts and operate instead within a non-technical business environment. Although such users possess valuable local knowledge about the business processes that the software supports, they generally lack the specialized knowledge and skills to navigate a version control system, modify source code, and justify their design decisions to the developer community. The innovation toolkit implicit in the open source model is thus inaccessible to most users of enterprise business software. Second, firms are normally motivated to discourage any valuable internal innovation from permeating the boundaries of the firm (Liebeskind, 1996). We argue, however, that firms have both the means and the motivation to act as lead-users in the development of enterprise FOSS. A fundamental reason that firms exist is to enable the division of labor and foster specialization. Accordingly, a lead-user need not be a single person—firms can and do hire experienced FOSS developers to implement the functionality desired by business users. The question is thus not whether a firm can participate in the improvement of enterprise FOSS, but rather why a firm would be willing to forego the advantages of proprietary control over its enhancements. We hypothesize two reasons, both of which are unrelated to altruism or the culture of reciprocity in the open source community. The first relates to the role of packaged software in a firm's strategy. Firms typically adopt packaged
enterprise applications (whether commercial or open source) to implement important but nonstrategic functionality (Hitt & Brynjolfsson, 1996). According to Beatty and Williams (2006), "the vast majority of firms that chose to undertake ERP projects based their decision on vendor promises that their organizations would realize significant cost savings in their core business." An analysis of performance following ERP implementation supports the view that such systems are better at providing internal gains in efficiency and productivity (e.g., decreases in the number of employees per unit of revenue) than competitive advantage (e.g., increased revenue) (Poston & Grabski, 2001). Firms that recognize that incremental customization of enterprise software is unlikely to confer sustainable competitive advantage are unlikely to view enforcement of their property rights for their modifications as a high priority.

Second, firms create a significant maintenance liability whenever they customize packaged software (Beatty & Williams, 2006). Future releases of the software may not be compatible with their customizations, and thus firms face a dilemma: they must either forego the new version's improvements or re-implement their customizations to make them compatible with the new release. A firm that customizes and enhances a FOSS application has a strong incentive to contribute its code to the project's main source tree in order to ensure institutionalized support for its changes in subsequent releases of the software. The benefits of remaining compatible with future releases of the software thereby outweigh the risks of passing a strategically valuable innovation to competitors.

The overtly economic incentives for lead-user development of FOSS at the firm level are clearly different from the subtle, individual-level incentives commonly acknowledged to drive successful FOSS projects. However, from the perspective of our analysis, the existence of the incentives is more
important than their root cause. The fact that firms are often motivated to use and contribute to the development of certain types of FOSS applications (specifically those that provide important but nonstrategic functionality) enables the comparisons in the following sections between well-known systems-oriented FOSS projects and emerging business-oriented enterprise FOSS projects.
A DYNAMIC MODEL OF OPEN SOURCE ADOPTION AND DISRUPTION

Most disruptive technologies initially appeal to a niche (Christensen, 1997; Danneels, 2004; Tellis, 2006). The niche typically consists of consumers who desire products that are simpler, cheaper, and more convenient (Christensen, 2000). The lack of up-front licensing fees makes FOSS attractive to this niche, at least on the dimension of short-term cost. However, simply being a low-price alternative to an existing technology is typically insufficient to disrupt an existing market. Disruption requires that the new technology improve dramatically over time along attributes valued by mainstream customers while still maintaining its appeal to initial niche adopters. Disruption is thus, at its core, a dynamic process in which technological improvement is critical.

We propose a novel model of FOSS disruption that consists of two distinct stages: an initial incubation stage (I-Stage) and a subsequent snowball stage (S-Stage). During the I-Stage, the software is developed and improved until it reaches a threshold level of functionality and compatibility with existing practices. These improvements along the organizational adoptability dimension may permit the software to attract a critical mass of adopters. Rogers (1995) defines critical mass as the level of adoption that ensures that the innovation's further rate of adoption is self-sustaining. As we discuss below, the notion of critical mass in our model is more specific:
it triggers the transition from the I-Stage to the S-Stage.

Most FOSS projects, despite possessing the advantages of low cost and flexibility, simply fail to achieve adoption beyond the original development team. Of the more than 75,000 projects hosted on SourceForge.net, a central repository for FOSS projects, only a small proportion are viable products with established niche markets (Hunt & Johnson, 2002). According to one analysis, "the typical project has one developer, no discussion or bug reports, and is not downloaded by anyone" (Healy & Schussman, 2003, p. 16). For the small proportion of FOSS projects that attract a critical mass of adoption, the internal mechanisms used during the I-Stage to achieve the critical mass are seldom transparent and can vary significantly from project to project. In many prominent FOSS projects, the threshold level of organizational adoptability was achieved through the efforts of a single individual or by a small number of individual hackers. In other cases, the threshold was achieved when the property rights holder granted source code for an established product to the FOSS community. An increasingly common means of achieving organizational adoptability is for a for-profit firm to develop a fully functional product under an open source license with the expectation of selling complementary commercial products and services (Dahlander, 2005). Regardless of the internal mechanisms of the I-Stage, some FOSS projects rate sufficiently high along the organizational adoptability dimension to attract a critical mass and make the transition to the S-Stage.

The S-Stage is characterized by gradual, but cumulatively significant, changes in both the adoption and improvement mechanisms of the technology. These changes are similar in scope to the distinct pre- and post-critical mass "diffusion regimes" that Cool, Dierickx, and Szulanski (1997) have identified. The change in adoption mechanism occurs as attributes of organizational adoptability (that is, properties of the technology itself) become relatively less important than
attributes of community adoptability. For example, adoption beyond a niche typically requires what Moore (2002) calls a whole product solution—the provision of complementary products (such as middleware for connections to existing systems) and services (such as consulting and education) from propagating organizations. The change in improvement mechanism typically occurs as the development process shifts from developers to lead-users and from a small, cohesive team to a large, heterogeneous community (Long & Siau, 2007).

We define the transition from the I-Stage to the S-Stage in terms of a critical mass of adopters (rather than a critical level of functionality or relative advantage) because of the recursive interdependence between adoption and improvement during the S-Stage. Although only a small proportion of adopters of the most successful FOSS projects actually contribute code, and although substantive participation by developers in a FOSS project is clearly critical to the project's success, adoption, even by non-participants, is important in its own right. Non-developers may contribute to the project in other important ways, such as clarifying requirements, submitting bug reports, or providing valuable knowledge to other
users through discussion forums and mailing lists. Even users who do nothing other than download the software (and thus apparently ride free on the efforts of project participants) can contribute incrementally to the increasing returns associated with FOSS adoption. For example, the decision by providers of complementary products to support a particular FOSS project often depends critically on the size of the installed base of users. These indirect increasing returns to adoption in FOSS occur independently of the existence of any direct network benefits associated with the technology itself. Thus, a project such as the Apache Web server can achieve dominance due to learning-by-using during the S-Stage even though the software itself exhibits only weak network benefits.1 Conversely, the presence of increasing returns in the incumbent market can prevent S-Stage improvement for a new technology. This can occur if the market has already tipped in favor of an incumbent or if several similar and competing technologies split the pool of early adopters so no technology achieves critical mass.

Figure 2. A dynamic model of adoption and disruption

The two stages of our model can be visualized as a specific trajectory through the quadrants of Fichman and Kemerer's (1993) Adoption Grid, as shown in Figure 2. First, the I-Stage results in a
development effort that moves the technology from the experimental quadrant to the niche quadrant. The key to this transition is a threshold increase along the organizational adoptability dimension. Second, in the absence of barriers such as prior technology drag, the S-Stage results in improvements that move the technology from the niche quadrant to the dominant technology quadrant. Adoption in the S-Stage is driven primarily by the effects of increasing returns and determinants of community adoptability, rather than the intrinsic attributes of the technology.
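The interplay just described, in which adoption feeds improvement and improvement feeds adoption once critical mass is reached, can be made concrete with a toy numerical sketch. The Java program below is not taken from the article; its functional form and parameter values are arbitrary illustrative assumptions, chosen only to show how a project stalls while it relies on organizational adoptability alone and then snowballs once the installed base crosses a critical-mass threshold.

```java
// Toy two-stage diffusion sketch (illustrative assumptions only).
public class DiffusionSketch {
    public static void main(String[] args) {
        double adopters = 0.01;            // installed base as a fraction of the potential market
        double adoptability = 0.30;        // organizational adoptability, improved during the I-Stage
        final double criticalMass = 0.05;  // assumed threshold for self-sustaining adoption

        for (int t = 0; t < 40; t++) {
            // I-Stage mechanism: the core team improves the technology itself.
            adoptability = Math.min(1.0, adoptability + 0.02);

            // S-Stage mechanism: past critical mass, each adopter raises the value of
            // adoption for the next (lead-user contributions, complements, expectations).
            double increasingReturns = (adopters > criticalMass) ? 2.0 * adopters : 0.0;

            double attractiveness = 0.3 * adoptability + increasingReturns;
            adopters = Math.min(1.0, adopters + (1.0 - adopters) * 0.05 * attractiveness);

            System.out.printf("t=%2d  adopters=%.3f%n", t, adopters);
        }
    }
}
```

With these arbitrary numbers, adoption creeps upward at first and then accelerates sharply once the installed base exceeds the assumed threshold, mirroring the transition from the I-Stage to the S-Stage.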
Ex Post Prediction: FOSS Case Studies

We assess the predictive value of our model by examining the development and adoption histories of a number of well-known FOSS projects. The flow chart in Figure 3 summarizes the main determinants of adoption and disruption that our model posits in terms of four sequential questions:

1. Does the software have sufficient organizational adoptability to attract a niche of adopters?
2. Are there significant barriers to the adoption of the software, such as prior technology drag?
3. Does a mechanism for improvement exist that is subject to increasing returns to adoption?
4. Is there a competing product with similar advantages that divides community attention and resources?
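Read in sequence, these questions form a simple screening procedure. The sketch below is a hypothetical Java encoding of that decision logic; the type and method names are our own illustrative inventions, not notation from the article.

```java
// Hypothetical encoding of the four screening questions (cf. Figure 3).
public final class DisruptionScreen {

    public enum Outcome {
        FAILS_IN_I_STAGE,       // Q1: never attracts a niche of adopters
        BLOCKED_BY_DRAG,        // Q2: prior technology drag or other adoption barriers
        NO_IMPROVEMENT_ENGINE,  // Q3: no increasing-returns improvement mechanism
        SPLIT_COMMUNITY,        // Q4: a rival splits community attention and resources
        LIKELY_DISRUPTIVE
    }

    public static Outcome assess(boolean nicheAdoptability,
                                 boolean significantBarriers,
                                 boolean improvementWithIncreasingReturns,
                                 boolean rivalSplitsCommunity) {
        if (!nicheAdoptability) return Outcome.FAILS_IN_I_STAGE;
        if (significantBarriers) return Outcome.BLOCKED_BY_DRAG;
        if (!improvementWithIncreasingReturns) return Outcome.NO_IMPROVEMENT_ENGINE;
        if (rivalSplitsCommunity) return Outcome.SPLIT_COMMUNITY;
        return Outcome.LIKELY_DISRUPTIVE;
    }

    public static void main(String[] args) {
        // OpenOffice, as analyzed below: an adoptable niche product facing heavy drag.
        System.out.println(assess(true, true, false, false));  // BLOCKED_BY_DRAG
        // SugarCRM, as argued later in the article: passes all four screens.
        System.out.println(assess(true, false, true, false));  // LIKELY_DISRUPTIVE
    }
}
```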
The Apache Web Server

One of the earliest freely available Web servers was developed by a group at the National Center for Supercomputing Applications (NCSA), a research lab sponsored by the U.S. government. The market for Web servers developed quickly at the start of the Internet boom in the early 1990s, and
several core members of the NCSA Web server development team left NCSA to join Netscape, a commercial developer of Web browser and server software. By October 1994, Netscape’s Communications Server 1.0 was selling for $1,495, while its Commerce Server 1.0 was selling for $5,000.2 The NCSA Web server continued to be both popular and freely available; however, the loss of key personnel meant that the mechanism for collecting and applying updated excerpts of source code (or “patches”) ceased to function effectively. In 1995, an independent Web site developer named Brian Behlendorf and a small group of developers took over responsibility for managing patches and later that year they released “a patchy server.” As of October 2005, Apache was running on roughly 70% of all Web servers (Netcraft Inc., 2005). The transition from the experimental to niche quadrant for the NCSA server occurred because of development subsidized by the U.S. government. As the NCSA and Netscape Web servers had similar functionality (and development teams), the NCSA server’s relative advantage in terms of organizational adoptability initially rested on its low cost and its ability to run on multiple server platforms. In contrast, Apache’s S-Stage transition to market dominance can be attributed to improvements in its community adoptability. First, the emergence of standard protocols eliminated the major sources of lock-in and prior technology drag in the Web server market. Open standards for content (HTML), transfer (HTTP), and eventually security (SSL) meant that all Web servers were essentially compatible with all Web browsers. Second, Apache’s architecture was designed to be highly modular. The flexibility provided by modularity was essential during the period when the concept of a “Web server” was being continuously redefined. Complementary modules were developed, such as mod_ssl, which permitted the Apache Web server to draw on the encryption and authentication services of the OpenSSL package to provide secure transactions. In their survey of
adoption of security-related modules by Apache users, Franke and von Hippel (2003) found that lead-user development played an important role in the provision of modules. Although only 37% of users in the sample reported having sufficient programming skills within their server maintenance groups to modify Apache source code (that is, exploit the innovation toolkit), and although less than 20% of the respondents reported making actual improvements to the code, 64% of users were able and willing to install modular enhancements to the Web server developed by lead-users.

A third element in Apache's increase in community adoptability during the S-Stage was the emergence of a well-regarded governance mechanism for the Web server project. The Apache community (formalized in 1999 as the Apache Foundation) facilitated inputs by lead-users through effective patch management and emerged as a credible sponsor of the project. The Apache Foundation has since become the sponsoring organization for a large number of FOSS projects, many of which have no direct relationship to Web servers. Similarly, the O'Reilly Group, a publisher of technical books, contributed to the expectation that Apache would become dominant by promoting books describing the use of the Apache Web server in concert with the Linux operating system, MySQL database, and the Perl/Python/PHP scripting languages—together forming the "LAMP stack" of complementary FOSS applications.
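The modularity that enabled this pattern can be sketched abstractly. Apache's real extension mechanism is a hook API for modules written in C (mod_ssl being the example above), so the Java sketch below is only a hypothetical analogue of the underlying extension-point pattern: a stable core exposes a narrow interface, and lead-users plug in capabilities without touching core source. All names in the sketch are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative analogue of a modular server core; not Apache's actual API.
public class ModularServerSketch {

    // The narrow extension point that third-party modules implement.
    interface Module {
        String filter(String request);
    }

    static class Core {
        private final List<Module> modules = new ArrayList<>();

        void register(Module m) { modules.add(m); }  // install without recompiling the core

        String handle(String request) {
            for (Module m : modules) {
                request = m.filter(request);  // each module hooks into the request path
            }
            return "response to: " + request;
        }
    }

    public static void main(String[] args) {
        Core server = new Core();
        // A lead-user adds a capability (standing in for mod_ssl's role) as a module.
        server.register(req -> "[decrypted] " + req);
        System.out.println(server.handle("GET /index.html"));
    }
}
```

The design choice matters for the adoption argument: the 64% of surveyed users who could not modify core source could still install modules, so lead-user innovations diffused far beyond the minority able to exploit the innovation toolkit directly.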
Eclipse

Eclipse is an open source development framework used for writing software in Java and other standardized programming languages. The product was originally developed commercially by Object Technology International (OTI), but was acquired by IBM in 1996 as a replacement for IBM's own commercial development environment, VisualAge. IBM subsequently invested an estimated $40 million in further refinements to Eclipse;
however, rather than releasing it as a commercial product, IBM granted the source code to the Eclipse Foundation in 2001 (McMillan, 2002) and has since remained a major supporter of the Eclipse project (Babcock, 2005). As of late 2005, Eclipse had acquired a market share in the integrated development environment (IDE) market of 20-30% and continues to grow, primarily at the expense of incumbent commercial products such as Borland’s JBuilder and BEA’s WebLogic Workshop (Krill, 2005). Eclipse’s I-Stage development resulted from the commercial development efforts of OTI and IBM. At the time that Eclipse was converted to a FOSS project, it already compared favorably along the organizational adoptability dimension to enterprise-level tools from incumbent commercial vendors. Eclipse’s S-Stage improvement has been rapid, due largely to the sponsorship of IBM, the relative absence of prior technology drag (due to support for standardized computer languages), and the accessibility of the innovation toolkit to the professional programmers that constitute the Eclipse user base. In addition, the reaction of commercial incumbents to the entry of Eclipse both increased its community adoptability and reduced barriers to adoption. Rather than compete with a high-performing FOSS product, incumbents such as Borland and BEA ceded the basic IDE market to Eclipse and have begun to reposition their commercial products as complements to the Eclipse core. The membership of these former competitors in the Eclipse Foundation has increased expectations that the Eclipse platform will become a dominant standard.
MySQL Relational Database

MySQL is a FOSS relational database management system (RDBMS) controlled by MySQL AB, a for-profit firm that retains copyright to most of the program's source code. Owing to MySQL's dual licensing scheme, the program is available under both the GNU General Public License
(GPL) and a commercial software license. Unlike the GPL, the commercial license enables firms to sell software that builds on or extends the MySQL code. MySQL's I-Stage development depended primarily on the development effort of the three founding members of MySQL AB. Once in the niche quadrant, however, MySQL competed with other multiplatform FOSS RDBMSs, notably PostgreSQL and, as of 2000, Interbase (now Firebird). In terms of organizational adoptability, both PostgreSQL and Interbase had significant technical advantages over MySQL, including support for atomic transactions and stored procedures. However, such features mattered less to developers of dynamic Web sites than stability, speed, and simplicity—particular strengths for MySQL. MySQL's edge over PostgreSQL in terms of these attributes led to its inclusion in the LAMP stack, an important factor in attracting complementary products such as middleware and education materials.

The impact of prior technology drag on the community adoptability of MySQL has been relatively small, despite the presence of well-established incumbents such as Oracle, IBM, and Microsoft in the client/server database market. One explanation for this is "new market" disruption (Christensen, 1997). Dynamic Web site development was a relatively new activity that was not particularly well served by the commercial incumbents. Many of the small firms and experimental Web design units within larger firms had neither the resources nor the functional imperative to acquire enterprise-level client/server databases and thus favored the simpler, low-cost alternatives offered by the open source community. In addition, the standardized use of SQL by all RDBMSs and existing middleware standards such as ODBC, JDBC, and Perl DBI minimized prior technology drag. According to the MySQL AB Web site, the program is now the world's most popular database.

Much of MySQL's improvement along the organizational adoptability dimension resulted
from the continued development efforts of MySQL AB (based, ostensibly, on feedback from lead-users). However, as MySQL became more widely adopted, it also attracted and incorporated complementary technologies from other FOSS projects. For example, InnoDB, a separate FOSS project that provides a more sophisticated alternative to MySQL's MyISAM storage engine, permitted MySQL to close the gap between it and PostgreSQL/Interbase by including many of the advanced RDBMS features absent from MySQL's original storage engine.

Despite its improvements, however, a clear demarcation remains between the enterprise-level RDBMS segment dominated by Oracle and IBM and the middle and low-end segments now dominated by MySQL. Adner (2002) has shown that the heterogeneity in consumer requirements that leads to segmentation becomes less important as the performance of products in all segments exceeds customer requirements. However, MySQL's S-Stage improvement mechanism may be adversely affected by a new generation of FOSS database competition. The grant to the open source community of two enterprise-level databases, SAP DB and Ingres, plus the emergence of EnterpriseDB (a commercially supported version of PostgreSQL), reduces the incentive for developers who require enterprise-level functionality and performance to add this capability to MySQL. Indeed, MySQL AB has acquired the commercial rights to develop and market SAP DB (now known as MaxDB) to provide an enterprise-level complement to the MySQL database engine.
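Two technical points above can be made concrete: standardized middleware such as JDBC reduces prior technology drag because switching databases is largely a URL and driver change rather than an API change, and InnoDB (unlike the original MyISAM engine) supplies atomic transactions. The sketch below illustrates both; the connection URL, credentials, and table are hypothetical, while the JDBC calls and the ENGINE=InnoDB clause are standard.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch only; database name, credentials, and table are invented.
public class MySqlEngineSketch {
    public static void main(String[] args) throws SQLException {
        // Vendor-neutral API: pointing at PostgreSQL instead would mostly mean
        // changing this URL and driver, not rewriting application code.
        String url = "jdbc:mysql://localhost:3306/crm";

        try (Connection con = DriverManager.getConnection(url, "app", "secret")) {
            con.setAutoCommit(false);  // group the updates into one atomic transaction
            try (Statement st = con.createStatement()) {
                // The ENGINE clause selects the transactional InnoDB engine;
                // the MyISAM engine of the era did not support transactions.
                st.executeUpdate("CREATE TABLE IF NOT EXISTS accounts ("
                        + "id INT PRIMARY KEY, balance DECIMAL(12,2)) ENGINE=InnoDB");
                st.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
                st.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
                con.commit();  // both updates become visible together, or not at all
            } catch (SQLException e) {
                con.rollback();  // on a MyISAM table this rollback would silently do nothing
                throw e;
            }
        }
    }
}
```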
OpenOffice

The OpenOffice suite of desktop productivity tools (a word processor, spreadsheet, and presentation design program) is meant to compete against Microsoft's dominant Office suite. The original product, known as StarOffice, was created by StarDivision, a commercial firm, but was purchased by Sun Microsystems in 1999. Sun
released the source code for the product to the Open Office project in 2000, but continues to develop a commercial version of StarOffice that includes a number of proprietary enhancements to the OpenOffice core. The I-Stage development of OpenOffice thus resembles the early development of Eclipse: a fledgling, but viable, product was purchased by a large software vendor and released as a fully functional FOSS project. The organizational adoptability of the niche product rested on two advantages over Microsoft Office: the ability to run on multiple platforms and the absence of a licensing fee. Despite its promising start, however, OpenOffice remains in the niche quadrant for three reasons. First, Microsoft Office imposes significant prior technology drag. Although OpenOffice is almost fully compatible with Microsoft Office, the lack of full compatibility imposes costs on those who choose not to participate in the dominant Microsoft Office network. Microsoft, as the incumbent technology standard, has strong incentives to prevent OpenOffice from achieving complete compatibility with Microsoft Office’s proprietary file formats (Hill, 1997). Second, OpenOffice suffers from a relative disadvantage in the number and quality of complementary products such as learning materials, templates, and add-ins. Third, and more controversially, the widespread piracy of Microsoft Office, particularly in the developing world, has partially neutralized OpenOffice’s cost advantage. Indeed, Microsoft’s reluctance to impose stricter piracy controls on its Office suite amounts to a versioning strategy to combat the emergence of disruptive alternatives (Farrell & Saloner, 1986). Of the relatively small number of adopters of OpenOffice who have overcome the prior technology drag imposed by the dominant commercial incumbent, many are non-programmers and are therefore unable to enhance OpenOffice’s source code (Brown, 2005). Although we argued above that firm-level contributions can overcome the separation between user and developer, the
components of the OpenOffice suite are primarily intended for individual use within the firm, not firm-level use. Firms may be willing to underwrite improvements to an enterprise database or Web server at the core of their enterprises. To this point, however, they have been less willing to pay to fix stability problems or interface annoyances in a PowerPoint clone. The barriers to adoption imposed by Microsoft Office combined with the lack of strong mechanisms for firm- or individual-level user development prevent OpenOffice from achieving the S-Stage improvement required for transition to the dominant technology quadrant. Barring an exogenous and significant increase in community adoptability—for example, widespread legislation mandating the use of non-proprietary file formats by government agencies—OpenOffice will remain in the niche quadrant and fail to achieve dominant status.
Summary of the Ex Post Case Studies

Table 1 summarizes each of the FOSS examples in terms of the mechanisms used for I-Stage development and S-Stage improvement. Both Apache and MySQL have already achieved disruption in their markets (or market segment in the case of MySQL). Eclipse is almost certain to achieve dominance, given the supportive response from commercial incumbents and the potential for significant learning-by-using effects in a market in which all users are software developers. OpenOffice, in contrast, illustrates the failure of a major FOSS project to ignite the virtuous cycle of S-Stage adoption and improvement despite offering a base level of functionality that is comparable to commercial competitors.

The I-Stage investment required to move a technology or product from the experimental quadrant to the niche quadrant is significant. Grants to the FOSS community of commercial products with established niches have become increasingly common, especially for products
Table 1. Summary of FOSS cases

Apache Web Server
- Mechanism for I-Stage development: Grant of Web server code from NCSA
- Key attributes of organizational adoptability (niche quadrant): Low cost, modular structure, multiplatform
- Mechanism for S-Stage improvement: Lead-user development by Web administrators
- Key attributes of community adoptability (dominant technology quadrant): Adherence to emerging W3 standards, sponsorship by the Apache Foundation, increased expectations due to central role in the LAMP stack

Eclipse Integrated Development Framework
- Mechanism for I-Stage development: Grant of code by IBM, subsequent investments by IBM in the FOSS project
- Key attributes of organizational adoptability (niche quadrant): Low cost, enterprise-level functionality
- Mechanism for S-Stage improvement: Commitment to development by major commercial tool vendors, lead-user development
- Key attributes of community adoptability (dominant technology quadrant): Adherence to standards, multiplatform, development of modules for multiple languages, sponsorship of IBM

MySQL Relational Database
- Mechanism for I-Stage development: Development effort by founders of MySQL AB
- Key attributes of organizational adoptability (niche quadrant): Low cost, speed, simplicity
- Mechanism for S-Stage improvement: Lead-user requests for features, development by MySQL AB
- Key attributes of community adoptability (dominant technology quadrant): Integration into LAMP stack, formal sponsorship of MySQL AB, informal sponsorship through O'Reilly LAMP books

OpenOffice Personal Productivity Software
- Mechanism for I-Stage development: Grant of source code by Sun Microsystems
- Key attributes of organizational adoptability (niche quadrant): Low cost, basic functionality, basic compatibility with Microsoft Office file formats
- Mechanism for S-Stage improvement: Development by Sun (StarOffice)
- Key attributes of community adoptability (dominant technology quadrant): Slow adoption due to prior technology drag (network benefits, complementary assets), some incompatibility with MS Office formats
that have failed to achieve the dominant position in tippy markets. For example, both SAP DB and Ingres were mature, enterprise-level products competing in a market segment dominated by Oracle and IBM prior to being converted to FOSS. Such grants are seen by some as a way for commercial software firms to abandon underperforming products without alienating the product's installed base. In the cases of Eclipse and StarOffice, the contributors' motivations may have been more strategic, prompted by direct competition with Microsoft.

Once in the niche quadrant, the forces of community adoption appear to be more important than the overall organizational adoptability of the technology. S-Stage improvement leads to increasing adoption, and increasing adoption feeds further S-Stage improvement. For Apache, Eclipse, and to a lesser extent, MySQL, lead-user development continues to be the dominant improvement mechanism because many users of such products have strong technical skills. Apache
and MySQL benefit from firm-level contributions, since they occupy critical roles in a firm's technology infrastructure. On the other hand, all three of the disruptive FOSS projects have benefited from open industry standards, which have reduced or eliminated prior technology drag.

Community adoptability becomes more difficult to assess when multiple FOSS projects compete against one another. The probability that multiple FOSS projects achieve niche status within the same market segment increases as commercial products are converted to FOSS projects. For example, the conversion of Interbase to a FOSS project in 2000 by Borland created competition within the mid-tier FOSS database segment for both users and developers. Much the same problem exists in the high-end segment due to the conversion of SAP DB and Ingres. As the economics of technology standards literature shows, predicting the dominant technology in such cases is extremely difficult.
EX ANTE PREDICTION: CRM AND THE THREAT OF FOSS DISRUPTION

CRM software enables firms to develop relationships with their customers. At its most basic level, CRM software provides the data and interfaces necessary for sales force automation. More generally, however, CRM "requires a cross-functional integration of processes, people, operations, and marketing capabilities that is enabled through information, technology, and applications" (Payne & Frow, 2005, p. 168). CRM is thus similar to ERP and SCM systems in terms of its scope, organizational impact, and technological requirements. All three types of enterprise software involve significant organization-wide commitments to shared processes and infrastructure. Moreover, all three provide support for important, but ultimately non-strategic business processes. We therefore believe that our model and analysis extend beyond the case of CRM and apply to enterprise software generally.

Commercial CRM software firms can be divided into three major strategic groups. The first group consists of the three leading CRM vendors: SAP, Oracle, and Siebel (recently acquired by Oracle). These firms target large organizations with complex, high-performance CRM implementations. The second group consists of a larger number of smaller vendors that target small to medium-size businesses (SMBs). Microsoft, Pivotal, Onyx, and SalesLogix are this group's market share leaders (Close, 2003). The third strategic group consists of "hosted" CRM vendors. A host, or application service provider (ASP), rents Internet-enabled CRM software to organizations for a subscription fee. Salesforce.com actively targets the SMB segment with particular emphasis on non-users (i.e., SMBs that have yet to adopt any CRM product).

The CRM industry's separation into strategic groups is consistent with segmentation theories in information goods markets. Suppliers of information goods incur high initial development costs
in order to produce the first copy of the product. Once produced, however, the marginal cost of producing an additional copy is effectively zero. In addition, suppliers of information goods face no physical limitations on production capacity (Shapiro & Varian, 1999). Consequently, the competitive equilibrium price of an undifferentiated information good available from multiple suppliers approximates the good's zero marginal cost. Suppliers in information markets therefore risk a catastrophic price collapse if their products become commoditized. Such collapses have occurred in several markets, including Web browsers, encyclopedias, and online stock quotes.

In general, there are three generic strategies for avoiding ruinous price-based competition in markets for information goods: differentiation, domination, and lock-in (Shapiro & Varian, 1999). Suppliers seek to avoid commoditization by differentiating their offerings based on some combination of their own capabilities and the heterogeneous requirements of customers. Accordingly, Siebel, Oracle, and SAP compete on the advanced features, scalability, and reliability of their software. Since many SMBs are unwilling to pay for these attributes, an opportunity exists for lower cost, lower functionality mid-tier CRM vendors (Band, Kinikin, Ragsdale, & Harrington, 2005). Domination, in contrast, requires cost leadership through supply-side economies of scale in fixed-cost activities such as administration, distribution, and marketing. For this reason, competition within a segment often leads to growth through consolidation. Finally, first movers may be able to retain a degree of pricing power by erecting high switching costs that lock in customers. Lock-in is common in the enterprise software market because of the high switching costs that flow from differences in underlying data models and formats, and the existence of indirect increasing returns to adoption. Moreover, the magnitude of the ongoing revenue streams generated from upgrades, customization, and integration provides explicit disincentives for vendors to reduce switching costs. A study of
packaged enterprise software by Forrester (Gormley, Bluestein, Gatoff, & Chun, 1998) estimated that the annual cost for such maintenance activities is 2.75 times the initial license fee (so, for example, a $100,000 initial license would imply roughly $275,000 per year in upgrade, customization, and integration costs).

Commercial CRM software vendors have attempted each of the three generic strategies for avoiding price-based competition: they have differentiated themselves into high- and mid-range segments, moved towards domination of a segment by consolidation, and erected high switching costs through proprietary data models and product-specific training. The result is a relatively stable two-tiered CRM market. However, the stability of this market structure is threatened by three potential sources of disruption.

The first is prototypical, low-end disruption from contact management products, such as Maximizer and Act. Although such tools are superficially similar to CRM software (they contain customer data, for example), contact management products are intended for single users or workgroups and do not provide the enterprise-wide integration of true CRM tools. The second and third potential sources of disruption are business model disruptions rather than product disruptions. Rather than defining a new product, business model disruption alters the way in which an existing product is offered to customers (Markides, 2006). The ASP model for CRM, pioneered by Salesforce.com, does not require customers to install and maintain software. ASPs service all their clients from a single central resource over the Internet. The apparent lower lifecycle cost to consumers leads to increased adoption, which in turn leads to greater economies of scale for the ASP. This virtuous cycle of adoption may ultimately permit Salesforce.com to achieve market dominance. The third candidate for disrupting the CRM market is emerging FOSS enterprise applications, such as Hipergate, Vtiger, Compiere, and SugarCRM. Of these FOSS entrants, SugarCRM is currently the clear frontrunner in CRM.

According to our dynamic model, both low-end disruption (by Act and Maximizer) and business
model disruption by Salesforce.com are unlikely. I-Stage development for commercial contact management software requires a significant investment by either the software’s developer or by others with incentives to do so. However, both the mid- and high-end segments of the CRM market are already contested by firms with mature products and installed customer bases. There is little incentive for Act or Maximizer to make large investments in order to enter these highly competitive segments. The community adoptability of Salesforce.com, in contrast, is bounded by the ease with which competitors can match both its ASP delivery mechanism and its pricing model. The presence of high switching costs means that the structure of the commercial CRM market will change more as a result of consolidation than disruption by other vendors of closed-source software. Disruption by a FOSS CRM program, such as SugarCRM, is more likely. SugarCRM is a dual-licensed application built on top of the functionality provided by the Apache-MySQL-PHP language interpreter stack. Its functionality, simplicity, and low cost have permitted it to establish a position in the niche quadrant: according to SugarCRM’s Web site, the product has been downloaded more than 1.5 million times.3 The application is entirely Web based, allowing it to be offered in ASP mode, much like Salesforce.com. Professional software developers backed by venture financing undertook the I-Stage development of SugarCRM. The founders of SugarCRM Inc., all former employees of a commercial CRM vendor, secured more than $26 million in three rounds of financing in the 18 months following the company’s incorporation in April 2004. Given its establishment as a niche product, the question is whether SugarCRM possesses a plausible mechanism for S-Stage improvement. Following our ex post analyses from the previous section, we decompose this question into two parts: (1) whether incumbents in the CRM market impose significant barriers to community adoptability, and (2) whether the SugarCRM community has
the capacity to improve the product to a point where it is functionally comparable to the offerings from mid-tier incumbents.
Barriers to Community Adoptability

The lack of direct network benefits in the CRM market means that the primary source of prior technology drag is switching costs. The use of proprietary data models and user interfaces in the CRM market means that changing to a different CRM product involves significant data migration and retraining. Consequently, SugarCRM will appeal most to non-users—firms that have to this point been unable to justify the cost of commercial CRM products. However, another possibility exists: some firms may perceive that they are excessively dependent on their current CRM vendor and may seek to avoid long-term vendor hold-up (an extreme form of lock-in) by switching to a FOSS CRM provider.

Extreme vendor dependence also arises in the enterprise software market due to the irreversibility of investments. Enterprise applications typically exhibit increasing returns to adoption within the firm. Specifically, many of the benefits of enterprise systems accrue from sharing data across multiple business processes and functions. The benefits of integration are difficult to assess during a localized pilot or trial, and thus implementation of an application such as CRM requires a significant, organization-wide commitment to training and process redesign. The risk of making such investments and then being stranded with a non-viable technology may lead some firms to favor well-established CRM vendors, such as Oracle and SAP. However, FOSS licensing and access to source code also reduce vendor dependence. The risk of being stranded with an orphaned FOSS technology depends more on the viability of the community of adopters than on the viability of a single firm. The relatively large amount of venture financing accumulated by SugarCRM Inc. has established it as a credible sponsor of the SugarCRM community and has reinforced expectations that the community of adopters will continue to grow.

Mechanisms for S-Stage Improvement

In our view, FOSS CRM has a plausible mechanism for S-Stage improvement. New-market adopters and firms seeking to decrease their vendor dependence will help FOSS CRM achieve a critical mass of users. These users have incentives to improve the product, thereby further closing the performance gap with incumbent commercial solutions. We therefore predict that FOSS CRM will make the transition from the niche quadrant to the dominant quadrant and disrupt the commercial CRM market.

The question of whether SugarCRM in particular will disrupt the commercial CRM market is more difficult to answer for two reasons. First, heterogeneous requirements across different vertical markets (e.g., financial services and consumer products) may lead to a large number of vertical-specific customizations. In such circumstances, the community may decide to fork the code into different projects for different vertical markets rather than attempt to manage the complexity of a single large code base. In such a case, the CRM market could become fragmented. Second, the economic opportunities facing incumbent commercial CRM vendors are at least as favorable as those facing SugarCRM Inc. Any incumbent commercial vendor could convert its application to a FOSS license and rely on revenue from complementary products and services to replace lost license revenue. Unlike SugarCRM, the incumbent firms have already established installed bases and networks of complementary products and services.
Summary: Predicting Disruption by FOSS

Figure 3. A flow chart to predict disruption by FOSS

As the flow chart in Figure 3 shows, SugarCRM satisfies the conditions identified by our model for successful disruption. However, the flow chart also suggests possible defensive responses by commercial incumbents to the emergence of niche FOSS competitors. Commercial software vendors may be able to erect barriers to community adoptability by maximizing prior technology drag or by attempting to control the ecology of complementors that support the new entrant. A possible example of the latter response is Oracle's acquisition of InnoDB. Commercial vendors may also release (or threaten to release) a competing FOSS project. For example, Sun's release of its Unix-based operating system, OpenSolaris, may have been an attempt to further fragment the open source community that develops and maintains Linux. The requirement to achieve a critical mass of adoption before S-Stage improvement can become self-sustaining means that such tactics by commercial incumbents can be effective in slowing or preventing disruption by FOSS entrants.

IMPLICATIONS FOR THEORY AND PRACTICE
In this article, we present a dynamic model of adoption and disruption that can help managers better understand the process of disruption by free and open source software. We illustrate the application of the model by analyzing the history of four well-known FOSS projects and applying the model to the case of SugarCRM, a FOSS CRM project. We predict that the FOSS model of production will disrupt the existing commercial CRM market; however, we cannot make theory-based predictions about whether a particular FOSS CRM project, such as SugarCRM, will be disruptive. SugarCRM currently rates highest among FOSS CRM products along the dimension of community adoptability. However, a measure of Christensen’s influence on practice is that firms are now more aware of the effects of disruptive innovations. Commercial incumbents facing disruption may act preemptively to undermine SugarCRM’s sources of relative advantage or to displace it entirely as the leading FOSS contender. The model proposed in this article contributes to the developing theory of disruptive innovation (Christensen, 2006; Danneels, 2004) by describing
the disruption process in terms of an established model of adoption. Fichman and Kemerer's (1993) Adoption Grid provides a concise synthesis of the adoption literature. We build on this synthesis to identify two temporally distinct stages that must occur before a FOSS project can move from an experimental to dominant level of adoption. Our model can be seen as a middle-range theory (Fichman, 2000), in that it describes the adoption of a particular technology (FOSS) in a particular context (enterprise applications such as CRM, ERP, and SCM).

Our work suggests several avenues for future theoretical and empirical research. First, our hypothesized inflection point between I-Stage and S-Stage development challenges the notion of a monolithic FOSS production method. Several researchers have noted inconsistencies in cross-sectional studies of practices as they are embodied in a few FOSS successes vs. those in the vast majority of FOSS projects (Niederman, Davis, Greiner, Wynn, & York, 2006; Healy & Schussman, 2003). We believe that our theoretical understanding of many aspects of the FOSS phenomenon not addressed in this article—including governance structures, software development techniques, and innovation-generating processes—will have to become more dynamic, temporal, and consequently contingent. Moreover, longitudinal studies of FOSS showing dramatic shifts in, for example, project governance and internal project dynamics are required to support our hypothesis of a two-stage progression from obscurity to dominance.

A second area for future research is the development of a better understanding of the expanding role of user-firms in FOSS development. For example, much of the economic rationale for firm-level lead-user development rests on the assumption that a user-firm can work within the FOSS community to have the firm's modifications and enhancements incorporated into a particular project. However, it is not clear how conflicting objectives between multiple firms within a FOSS
project might be resolved or whether firms have incentives to behave strategically within the project. In this way, the dynamics of firm-level participation in FOSS resemble those of firm-level participation in standard-setting bodies (Foray, 1994).

Finally, the policy implications of our two-stage model have not been addressed. As our analysis of existing FOSS projects shows, some valuable software applications become public goods by accident. Substantial sunk investments are made during the I-Stage, with the expectation that the resulting software will be a commercially viable private good. The public policy implications of subsidizing I-Stage development in order to exploit S-Stage expansion and refinement have not been explored, but are worthy of further research.
Acknowledgment

The authors acknowledge the Social Sciences and Humanities Research Council of Canada (SSHRC) Initiatives for the New Economy (INE) program for financial support. We would like to thank Matthew Brownstein for his excellent research assistance.
REFERENCES

Adner, R. (2002). When are technologies disruptive? A demand-based view of the emergence of competition. Strategic Management Journal, 23(8), 667–688.

Arrow, K. (1970). Social choice and individual values (2nd ed.). New Haven, CT: Yale University Press.

Attewell, P. (1992). Technology diffusion and organizational learning: The case of business computing. Organization Science, 3(1), 1–19.
Babcock, C. (2005). Eclipse on the rise. Retrieved January 29, 2006, from http://www.informationweek.com

Band, W., Kinikin, E., Ragsdale, J., & Harrington, J. (2005). Enterprise CRM suites, Q2, 2005: Evaluation of top enterprise CRM software vendors across 177 criteria. Cambridge, MA: Forrester Research.

Battelle, J. (2005). The search: How Google and its rivals rewrote the rules of business and transformed our culture. New York: Penguin.

Beatty, R., & Williams, C. (2006). ERP II: Best practices for successfully implementing an ERP upgrade. Communications of the ACM, 49(3), 105–109.

Benkler, Y. (2002). Coase's penguin, or, Linux and the nature of the firm. The Yale Law Journal, 112(3), 1–42.

Brown, A. (2005). If this suite's a success, why is it so buggy? Retrieved March 15, 2006, from http://www.guardian.co.uk

Christensen, C.M. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press.

Christensen, C.M. (2000). After the gold rush. Retrieved January 30, 2006, from http://www.innosight.com

Christensen, C.M. (2006). The ongoing process of building a theory of disruption. Journal of Product Innovation Management, 23(1), 39–55.

Close, W. (2003). CRM suites for North American MSB markets: 1H03 magic quadrant. Stamford, CT: Gartner Inc.

Cool, K.O., Dierickx, I., & Szulanski, G. (1997). Diffusion of innovations within organizations: Electronic switching in the Bell system, 1971-1982. Organization Science, 8(5), 543–560.

Dahlander, L. (2005). Appropriation and appropriability in open source software. International Journal of Innovation Management, 9(3), 259–285.

Danneels, E. (2004). Disruptive technology reconsidered: A critique and research agenda. Journal of Product Innovation Management, 21(4), 246–258.

Davis, F. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 318–339.

Farrell, J., & Saloner, G. (1986). Installed base and compatibility: Innovation, product preannouncements, and predation. American Economic Review, 76(5), 940–955.

Fichman, R.G. (2000). The diffusion and assimilation of information technology innovations. In R. Zmud (Ed.), Framing the domains of IT management: Projecting the future through the past. Cincinnati, OH: Pinnaflex.

Fichman, R.G., & Kemerer, C.F. (1993). Adoption of software engineering process innovations: The case of object orientation. Sloan Management Review, 34(2), 7–22.

Fitzgerald, B. (2006). The transformation of open source software. MIS Quarterly, 30(3), 587–598.

Foray, D. (1994). Users, standards and the economics of coalitions and committees. Information Economics and Policy, 6(3-4), 269–293.

Franke, N., & von Hippel, E. (2003). Satisfying heterogeneous user needs via innovation toolkits: The case of Apache security software. Research Policy, 32(7), 1199–1216.

Franke, N., von Hippel, E., & Schreier, M. (2006). Finding commercially attractive user innovations: A test of lead user theory. Journal of Product Innovation Management, 23(4), 301–315.

Gormley, J., Bluestein, W., Gatoff, J., & Chun, H. (1998). The runaway costs of packaged applications. The Forrester Report, 3(5).
Healy, K., & Schussman, A. (2003). The ecology of open source software development. Retrieved January 8, 2007, from http://opensource.mit.edu/papers/healyschussman.pdf

Hill, C.W.L. (1997). Establishing a standard: Competitive strategy and technological standards in winner-take-all industries. Academy of Management Executive, 11(2), 7–25.

Himanen, P., Torvalds, L., & Castells, M. (2002). The hacker ethic. New York: Random House.

Hitt, L., & Brynjolfsson, E. (1996). Productivity, profit, and consumer welfare: Three different measures of information technology's value. MIS Quarterly, 20(2), 144–162.

Hovav, A., Patnayakuni, R., & Schuff, D. (2004). A model of Internet standards adoption: The case of IPv6. Information Systems Journal, 14(3), 265–294.

Hunt, F., & Johnson, P. (2002). On the Pareto distribution of SourceForge projects. Proceedings of the Open Source Software Development Workshop (pp. 122–129), Newcastle, UK.

Iacovou, C.L., Benbasat, I., & Dexter, A.S. (1995). Electronic data interchange and small organizations: Adoption and impact of technology. MIS Quarterly, 19(4), 465–485.

Katz, M.L., & Shapiro, C. (1994). Systems competition and network effects. Journal of Economic Perspectives, 8(2), 93–115.

Krill, P. (2005). Borland upgrading IDE while preparing for Eclipse future. Retrieved January 30, 2006, from http://www.infoworld.com

Lerner, J., & Tirole, J. (2002). Some simple economics of open source. Journal of Industrial Economics, 50(2), 197–234.

Liebeskind, J.P. (1996). Knowledge, strategy, and the theory of the firm. Strategic Management Journal, 17, 93–107.

Long, Y., & Siau, K. (2007). Social network structures in open source software development teams. Journal of Database Management, 18(2), 25–40.

Lyons, D. (2004). Peace, love and paychecks. Retrieved January 30, 2006, from http://www.forbes.com

MacCormack, A. (2002). Siemens ShareNet: Building a knowledge network (case 603036). Cambridge, MA: Harvard Business School Press.

Markides, C. (2006). Disruptive innovation: In need of better theory. Journal of Product Innovation Management, 23(1), 19–25.

McMillan, R. (2002). Will Big Blue eclipse the Java tools market? Retrieved January 27, 2006, from http://www.javaworld.com

Moore, G.A. (2002). Crossing the chasm: Marketing and selling high-tech products to mainstream customers (revised ed.). New York: HarperBusiness Essentials.

Moore, G.C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222.

Morton, A. (2005). Lead maintainer, Linux production kernel. IT Conversations: SDForum Distinguished Speaker Series. Retrieved January 31, 2006.

Netcraft Inc. (2005). October 2005 Web server survey. Retrieved December 5, 2006, from http://news.netcraft.com/archives/2005/10/04/october_2005_web_server_survey.html

Niederman, F., Davis, A., Greiner, M., Wynn, D., & York, P. (2006). A research agenda for studying open source I: A multi-level framework. Communications of the AIS, 18(7), 2–38.

Payne, A., & Frow, P. (2005). A strategic framework for customer relationship management. Journal of Marketing, 69(4), 167–176.
Poston, R., & Grabski, S. (2001). Financial impacts of enterprise resource planning implementations. International Journal of Accounting Information Systems, 2(4), 271–294.

Rai, A., Ravichandran, T., & Samaddar, S. (1998). How to anticipate the Internet's global diffusion. Communications of the ACM, 41(10), 97–106.

Ravichandran, T. (2005). Organizational assimilation of complex technologies: An empirical study of component-based software development. IEEE Transactions on Engineering Management, 52(2), 249–268.

Riemenschneider, C.K., Hardgrave, B.C., & Davis, F.D. (2002). Explaining software developer acceptance of methodologies: A comparison of five theoretical models. IEEE Transactions on Software Engineering, 28(12), 1135–1145.

Robey, D., & Boudreau, M. (1999). Accounting for the contradictory organizational consequences of information technology: Theoretical directions and methodological implications. Information Systems Research, 10(2), 167–185.

Rogers, E.M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.

Shapiro, C., & Varian, H.R. (1999). Information rules: A strategic guide to the network economy. Cambridge, MA: Harvard Business School Press.

Taylor, S., & Todd, P.A. (1995). Understanding information technology usage: A test of competing models. Information Systems Research, 6(2), 144–176.

Tellis, G.J. (2006). Disruptive technology or visionary leadership? Journal of Product Innovation Management, 23(1), 34–38.

Tingling, P., & Parent, M. (2002). Mimetic isomorphism and technology evaluation: Does imitation transcend judgment? Journal of the Association for Information Systems, 3(5), 113–143.

von Hippel, E. (1998). Economics of product development by users: The impact of 'sticky' local information. Management Science, 44(5), 629–644.

von Hippel, E. (2005). Democratizing innovation. Cambridge, MA: MIT Press.

Weber, S. (2004). The success of open source. Cambridge, MA: Harvard University Press.

West, J., & Gallagher, S. (2006). Challenges of open innovation: The paradox of firm investment in open-source software. R&D Management, 36(3), 319–331.

ENDNOTES

1. All leading Web servers rely on the same standard protocols, including HTML, HTTP, and SSL. The strong network benefits thus occur at the protocol level, rather than at the level of the application software that implements the protocols.

2. Netscape's company name at the time was Mosaic Communications; it was changed shortly afterward. An order form is shown at http://www.dotnetat.net/mozilla/mcom.10.1994/MCOM/ordering_docs/index.html

3. http://www.sugarforge.org/ (accessed March 22, 2006)

This work was previously published in Journal of Database Management, Vol. 19, Issue 2, edited by K. Siau, pp. 73-94, copyright 2008 by IGI Publishing (an imprint of IGI Global).
Section VI
Managerial Impact
This section presents contemporary coverage of the managerial implications of enterprise information systems. Particular contributions explore relationships among information technology, knowledge management, and firm performance, while others discuss the evaluation, adoption, and technical infrastructure of enterprise information systems. The managerial research provided in this section allows administrators, practitioners, and researchers to gain a better sense of how enterprise information systems can inform their practices and behavior.
Chapter 6.1
A Domain Specific Strategy for Complex Dynamic Processes Semih Cetin Cybersoft Information Technologies, Turkey N. Ilker Altintas Cybersoft Information Technologies, Turkey Ozgur Tufekci Cybersoft Information Technologies, Turkey
Abstract

This chapter identifies the issues that might create orthogonal complexities for process dynamism, and decouples the components implementing them in a "domain specific" way. The authors believe that traditional process management techniques for modeling and executing processes still fall short of improving the dynamism of an enterprise. Some of the reasons are: using too "generic" techniques and tools for process management that are not scalable enough for typical business cases, having a lack of architectural coverage to manage the tradeoffs between dynamism and other business quality issues, insufficient support for integrating legacy business processes, and unbalanced guidance between "primary" and "supportive" processes. In order to improve business agility, particularly with dynamic processes, effective abstraction and composition techniques are needed for the systematic design of primary and supportive processes in an organization. The authors bring in the "Domain Specific Kit" abstraction as a way to improve the dynamism of complex processes.

DOI: 10.4018/978-1-60566-669-3.ch018
Introduction

For many decades, enterprises have been looking for efficient, reliable, flexible, and adaptable processes. The increasing agility of the business world forces organizations to be more dynamic in every possible way. However, traditional process management techniques for modeling and automating the core
business processes fall short to enhance the dynamism in an enterprise. In traditional business process management, IT mainly abstracts the complexities of business processes with automated methods and tools that unsurprisingly introduce the categorization of processes as “primary” and “supporting” ones. In order to improve the dynamism of complex business processes, IT departments should no longer be the roots of this process categorization, but rather they should provide the right toolset to business departments for flexible process modeling and execution. However, this is not that much easy to achieve. Proposals abound to segregate the business and IT perspectives for dynamic processes, but these efforts have fallen short so far for many practical cases. Some of the issues behind this incapability can be detailed as follows: first, existing approaches are too “generic” to be used for every sort of complex [business] process. On the other hand, organizations run the business in different domains and expectedly, they have to comply with different process requirements. One example is integrating different processes of two organizations, one from banking domain and the other from automotive domain, for the processing of consumer loan for automobile sale transaction. The composition of their processes could not be simply orchestrated at run time without having a process choreography model at design time. The generic “process orchestration” models or strategies cannot easily solve service-oriented quality issues such as cross-domain security protocols (Tufekci et al., 2006; Aktas and Cetin, 2006). The second issue is the lack of architectural coverage. The “dynamism” of a complex process is primarily a “non-functional requirement”, which cannot be achieved by using pure “functional” approaches. Rather, architectural modeling plays an important role here to ensure the process quality. Hence, modeling the business processes with declarative approaches and implementing them using only Web Services will not
be enough for process dynamism. This minimalist “functional” thinking cannot help design the architectural aspects (i.e. security, performance, flexibility, modifiability, extensibility and adaptability) of dynamic complex processes. Instead, a reference architecture model (in a meta-level) is needed to conceptually design the domain specific components of process management. That is why Service Level Agreements (SLA) is still a debate in the Service-Oriented Architecture (SOA) community to compose services of different processes (Keller and Ludwig, 2002). The third drawback is the lack of support for integrating the legacy processes. We know that enterprises still have to run the business with legacy processes worth of billions of dollars, which cannot be simply reshaped in a night. Thus, any strategy to improve the dynamism of complex processes should consider the existing process assets accordingly. The domain specific abstractions could help in that sense to abstract the complexities of existing services and processes so that they can be migrated in a reasonable period of time (Sneed, 2000; SEI, 2005; Ziemann et al., 2006; Cetin et al. at ICPS, 2007; Cetin et al. at ICSEA, 2007). Additionally, existing approaches focus on the dynamism of “primary processes” but, on the other hand, they mostly neglect the dynamism of “supporting processes” and “organizational processes” (Havey, 2005; Tufekci et al., 2006). This degrades the overall process dynamism since supporting and organizational processes used by IT departments may easily put a barrier against the agile processes of business departments. This is almost the case when IT departments should comply with software process improvement standards (Yeh, 1991; Cetin et al. at EuroSPI2, 2006). The last difficulty occurs in the setup procedures of service and process execution infrastructures for business agility. The classical way of setting up information systems to model and execute the complex processes follows agile or heavyweight methodologies, but with one com-
1423
A Domain Specific Strategy for Complex Dynamic Processes
mon characteristic in mind that “business people asks and IT people provides”. This approach usually ends up with “highly tailored” models that cannot be flexible or extensible for future expectations. On the other hand, many concerns such as business rules, workflows, services, content generation and batch processing can be segregated from the actual application to facilitate the dynamism of process modeling and process execution. This way of isolated process development requires selective development lifecycles where related functional and non-functional requirements need to be identified, encapsulated, modeled, designed and implemented in a “domain specific” way (Fowler, 2002; Cetin and Altintas, 2008; Altintas, 2007). The chapter will define a strategy in an extensive manner and explain the details how this strategy can overcome the introduced challenges for dynamic complex processes. The strategy given here is not a mere conceptual or academic study. Rather, it has been already put into action for the implementation of MEROPS (Central Operations Management) of a mid-scale bank in Turkey. MEROPS is a specialized unit in the bank responsible for management of documents and associated workflows in a very dynamic business environment coping with ever-changing business requirements of more than 2500 bankers every day. In order to meet the business needs in a timely manner, a complete infrastructure has been designed and implemented due to the strategy given here where business workflows, business rules, business services and associated content can be created and modified dynamically by means of Domain Specific Kits and Choreography Modeling. This work has been partly supported by TUBITAK1.
Background

Business processes are highly complex, involve many different participants and span multiple information systems. Running business processes is no longer possible without support from modern information technology. Moreover, optimizing business processes is crucial for the business success of companies. Therefore, processes have to be continuously improved and have to be flexible enough to deal with a dynamic environment in times of global competition (Burmeister et al., 2006). However, the existing picture of enterprise process management is rather different. Processes are modeled almost statically, lack standard and flexible process execution infrastructures, and are far from being managed declaratively. In traditional business process management, IT mainly abstracts the complexities of business processes by means of hard-coded methods and primitive tools that unsurprisingly introduce the categorization of processes in an enterprise. Inspired by software process modeling, from which well-known standards like ISO 12207, ISO/IEC 15504, SW-CMM and CMMI derive, Figure 1 proposes a categorization template to deal with the major vertical and horizontal relations and feedback mechanisms inherent in process categories (Tufekci et al., 2006).

Figure 1. Major process categories

In Figure 1, "primary processes" represent the processes constituting the core business of an organization. They can be identified by itemizing the services the organization is established for. For example, acquisition and development processes are primary processes for the software industry, and manufacturing processes are primary processes for the automotive industry. Primary processes are accompanied by "supporting processes", which do not typically result in final products of the organization but rather contribute indirectly to the value added. They are performed to control and maintain the integrity of the product or service developed by primary processes. Additionally, they ensure that products and processes comply with predefined provisions and plans. Documentation, configuration management, verification, training and audit processes are supporting processes. Surely, the categorization is relative to the business domain. As an example, the audit process is a primary process of an audit organization, whereas it is a supporting process for an ISO 9001 certified manufacturing company. Finally, "organizational processes" include the processes establishing the business goals of an organization. They are formed as generic activities that apply to all processes, either primary or supporting. Managerial processes, resource and infrastructure processes, and improvement processes are all in the organizational process category. The interference of organizational processes with other processes is bi-directional; they provide the goals, plans and resources to other processes and accumulate the operational data for measurement, analysis and improvement.
The traditional view of business process modeling originates from the two distinct perspectives of business and IT experts and their respective primary and supporting processes. For instance, consider a bank. Undoubtedly, the banking processes are the primary processes. On the other hand, software development is considered a supporting process for a bank, whereas it is a primary process for an IT vendor or IT department. Identical processes are perceived as primary or supporting from different angles. The business and IT perspectives are two distinct viewpoints on the same software development process. Business people primarily concentrate on the functional requirements and business rules, and need to employ the new system, as well as alter it as needed, in a very short time. In contrast, IT people focus on architectural issues and quality factors such as reliability, robustness, etc. These two different perspectives define different "domains" for the business and IT departments. A "domain" is defined here, at its simplest, as "the bounded area of interest and dedication for problem solving". The problem usually occurs when these two distinct domains have to be harmonized somehow, with IT and business experts expected to understand each other's domain terminology entirely. As an example, banking experts are expected to know what Web 2.0 stands for, whereas IT experts are expected to know what EBITDA (Earnings Before Interest, Taxes, Depreciation and Amortization) actually means. The discrepancy between the two perspectives is most apparent in inefficient development cycles, over-budget and over-time software projects, architectural problems, performance issues, etc. The traditional process management given in Figure 1 usually ends up with lengthy development times, loss of communication, more dependency on business-unaware IT professionals, vicious change management cycles, etc.

One way to manage the challenges of process dynamism is separating the perspectives of primary and supporting processes into different viewpoints and composing them through a common strategy (see Figure 2). Figure 2 shows the segregated perspectives of business and IT, where the supporting processes of the business perspective are joined with the primary processes of the IT perspective. A "battlefield process" in Figure 1 is one that cannot be clearly identified as either an IT process or a business process. Rapid prototyping of the user interface (screens and reports) is a typical battlefield process.

Figure 2. Discrepancy between perspectives

In most information system development efforts, battlefield processes are well known as the "risky loss of communication", and almost every software development life cycle suggests iterative / incremental approaches to reduce this risk. Optimizing the battlefield processes will positively affect the primary business processes, the supporting IT processes and, above all, the organizational processes. Thus, the effective management of battlefield processes is crucial to the success of both the IT and business perspectives. Figure 2 summarizes the following vision: IT departments should no longer be the roots of process categorization; rather, they should provide the right toolset to business departments for flexible process modeling and execution. By doing so, both business experts and IT experts can focus on their own domains, and the interaction between the two perspectives can be limited to the battlefield processes. Declarative approaches might help a lot to synthesize languages (Domain Specific Languages [DSLs] such as the Business Process Execution Language) specific to these completely different domains for the better management of battlefield processes. A similar strategy has been put forward by previous research (Pesic and van der Aalst, 2006; Küster et al., 2006). However, existing research has not concentrated enough on domain specificity (Domain Specific Modeling, use of Domain Specific Languages or Domain Specific Toolsets) for the management of complex dynamic processes.

Some proposals abound to achieve the separation between business and IT perspectives for more flexible, adaptable and dynamic processes, too. For example, workflow management models, methods and systems have been categorized and presented by (van der Aalst and van Hee, 2002). Workflow patterns have been classified and presented extensively by (Russell et al., 2006; van der Aalst et al., 2003). Likewise, process patterns have been catalogued by (Ambler, 1998).
Workflow flexibility and process patterns contributed a lot to process dynamism, but existing research on how legacy implementations can take part in workflow or process patterns has not been detailed much. In particular, the non-functional issues of process integration with legacy systems are still major research questions.

From the industrial standards perspective, modeling and execution languages for business processes have been put forward to manage them in dynamic environments. Abstract modeling languages like the Business Process Modeling Language (BPML, 2002) and execution languages such as BPEL (BPEL, 2004) have been defined, and even a Business Process Modeling Notation (BPMN, 2004) has been put forward to design business processes in a more abstract way. Later, WS-BPEL 2.0 became the convergence path for different languages from different vendors for specifying business processes based on Web Services, which completely changed the former BPEL approach (WS-BPEL 2.0, 2007). In support, associated frameworks and tools have been developed to manage complex business processes in a dynamic environment. WS-BPEL 2.0 is the widely accepted standard for today's industrial applications; however, the non-functional issues of service selection and composition urge system vendors such as IBM, Oracle, SAP and Microsoft to focus on more aspect-oriented service bus implementations, which are still being researched in laboratories. The research results might change the WS-BPEL structure in the near future to synthesize the different Enterprise Service Bus implementations.

Process modeling plays an important role in process dynamism. Previous research has shown that Model-Driven Development (MDD) can help transform business process models into actively running dynamic processes. For example, (Scacchi, 1998) researched the challenges of process modeling, especially for visualizing and enacting processes in complex organizations, and proposed an approach for modeling processes that helps with dynamic management. (Kumaran et al., 2007) introduced a modeling framework, which adapts and extends workflow technologies to address the unique requirements of the IT service management domain. These extensions are primarily in three areas: life-cycle management, dynamic process execution, and federated workflow. (Shi et al., 2005) provided a service-oriented and business process model driven platform for developing on-demand (dynamic) business systems, where the platform uses two separate views (business and IT views) to relieve each party from unfamiliar domains. The platform contains a reasoning engine to support the key rules that are presented for automatic process transformation from the business view to the IT view. The research on the use of MDD in complex dynamic process management has mainly concentrated on the Model Driven Architecture (MDA) of the Object Management Group (OMG). However, the challenges of complex dynamic process management urge us to further research the potential value of Domain Specific Modeling, especially from aspect-oriented and architecture-based perspectives.

Aspect orientation is another technique to help achieve process dynamism. By using the key principles of Aspect-Oriented Software Development (AOSD), crosscutting concerns such as business rules and workflows can be designed separately and woven into the final picture dynamically. For instance, previous research has shown that business rules can be segregated from business processes to improve process dynamism (Cetin et al. at DPM, 2006; Cibran and D'Hondt, 2006; Date, 2000). Similarly, workflows can be totally separated from applications using aspect-oriented techniques. (Tombros, 1999) proposed an aspect-oriented workflow model to separate the static and dynamic aspects of a workflow system. This approach empowers business modelers to control and monitor the dynamic parts of a business model to improve process dynamism.

Domain specific approaches and their use for modeling dynamic processes are not new at all.
More than two decades ago, (Winograd and Flores, 1986) suggested a different way to look at computers and the way they are used. Their vision had its roots in philosophy, biology and architecture. Extrapolating from cognition as a biological phenomenon, they concluded that the most successful designs are not those that try to fully model the domain in which they operate, but those that are "in alignment" with the fundamental structure of that domain and that allow for modification and evolution to generate new structural coupling. Later, (Dragos and Johnson, 1998) built on this philosophy to present an infrastructure for process and product models in which structural coupling plays a central role. They described the components of a common infrastructure as a "domain model engine" that stands on the shoulders of the first version of the "Adaptive Object Model (AOM)". Two years after the introduction of "domain model engines", (Yoder and Razavi, 2000) rephrased the AOM and introduced the "micro-workflow", where a domain specific model engine was designed for adaptive workflow systems that tackles the workflow problem at the object level. Addressing the use of domain specific approaches for dynamic processes, (Dou et al., 2005) mentioned the "data perspective" of domain specificity for achieving dynamic business processes, where the extracted workflow constituents can be represented by domain specific applications. Accordingly, a hybrid workflow system can be treated as a collection of domain-specific applications integrated across application frameworks. Similarly, (Wild et al., 2006) presented dynamic engines, which allow for flexible IT support of business processes in highly dynamic environments like the logistics industry, by extracting process logic and business rules from application programs and applying the cases to domain specific calculation functions.
The Proposed Strategy

Process modeling today tends to demarcate the perspectives of business and IT experts and their respective primary and supporting processes. In addition to separating the perspectives of the business and IT domains, dynamism in complex [business] process management can be achieved by paying attention to concerns such as flexibility, extensibility, modifiability and adaptability, all known as "architectural aspects". Architectural modeling of enterprise applications that can execute complex business processes is probably more complicated than any other part of the software design process. For that reason, the architecture of enterprise applications should be modeled by taking multiple concerns and their tradeoffs into account. These concerns and tradeoffs usually exist orthogonal to the software processes (Harrison, 2002; Cetin et al. at ICSEA, 2006). Moreover, it is worth mentioning that such crosscutting aspects, orthogonal to the actual business processes, are the main cause of the "battlefield processes" given in Figure 2. In order to manage the tradeoffs among flexibility, modifiability and adaptability across different processes, proper abstraction and composition techniques need to be used for the architectural design of process execution. To this end, the proposed strategy first identifies the architectural components that might create orthogonal complexities for process dynamism and implements them in a domain specific manner. Then, it introduces a choreography platform (a choreography language and an engine) to compose the previously isolated domain specific parts. The proposed strategy is based on the following issues:

• Separation of business and IT perspectives (domains): the clear separation of the business and IT domains allows both parties to focus on their own domains (fields of interest), which lessens the burden of learning each other's domain terminology.

• Abstraction of business and IT perspectives with domain specific approaches: both parties (business and IT) have different process viewpoints, each with its own complexities to be dealt with. Besides, providing dynamism for each one has certain different characteristics. For instance, the process dynamism that can be introduced by workflow flexibility might pay off maximally for the business perspective, but not exactly for the IT perspective.

• Composition of both perspectives throughout battlefield processes: the distinct processes of each perspective should be aligned at certain points, named "battlefield processes" in Figure 1. The strategy adopts a choreography-based approach to align these processes. In doing so, a descriptive choreography language is used to bind the business and technical contexts of both perspectives.

• Abstracting legacy integration in a domain specific way: integration with legacy processes actually means two different things for business and IT. Hence, generic approaches that address both perspectives usually fall short for one of them, or sometimes for both at the same time. The domain specific approaches adopted for separating the two perspectives also help here to integrate the legacy processes: the Domain Specific Languages, and the Domain Specific Engines that execute them, have dedicated keywords for the integration of legacy processes.

• Achieving quality by means of process choreography reference architectures: apart from achieving process dynamism, other quality factors should be considered, especially against potential tradeoffs. For instance, process dynamism with declarative workflow flexibility might introduce security leaks in process management unless designed carefully. The strategy has a choreography setup based on reference architectures for certain business domains. As an example, different architectural templates at the meta-level are expected to be designed for different business domains such as banking, insurance, or automotive.

• Simplifying the setup process: the setup process should not further complicate the business and IT perspectives. Domain specific languages shorten learning curves and improve feedback between parties with simplified prototyping and pilot runs. Moreover, configurations can be abstracted in every possible way by means of domain specific engines instead of generic workflow engines or enterprise service buses.
The proposed strategy has been mainly driven by the Software Factory Automation (SFA) vision (Altintas et al., 2007; Altintas, 2007). The SFA vision is highly motivated by the fact that constructing versatile software products or setting up complex business processes can no longer be managed by means of generic approaches such as "Extreme Programming is the right software process model for every kind of business agility" or "the .NET framework and Java application servers are sufficient for all sorts of transactional applications". Instead, SFA envisages that the ever-growing complexity of primary and supporting processes should be identified individually and managed independently. This motivation, however, requires having a broad palette of expertise within an organization that can be continually used for every new case, which is almost impossible. This is the actual picture in the automotive industry, for example, where automobile manufacturers cannot have the expertise for every detail like airbags or parking sensors. The way to tackle this challenge is setting up a factory process with moving assembly lines for integrating the in-house manufactured parts with the ones acquired from third parties. Then the following questions come to mind: who will set up the factory, and which sort of expertise is needed to form the assembly processes? The answers have already been found for "industrial factories": there are organizations whose business is precisely to set up factories for the manufacturing of typical products. These organizations are known as "[industrial] factory automation vendors", and they have well-defined, tested and proven processes for setting up factories. After such a factory has been set up, the factory management puts its own processes in place, or customizes the factory automation vendor's processes, to manage the product-manufacturing life cycle. In short, industrial factory owners are far from setting up their own factories by themselves, since this requires completely different expertise and associated processes. Instead, they ask for the help of industrial factory automation vendors to set up the factories for them. But once the factory has been established, the factory owners can use the facilities (infrastructures, processes, etc.) provided by the factory automation vendors to create and dynamically manage the actual processes.

What is the existing picture for delivering software products? It is totally different, since many organizations feel confident enough to set up their own IT processes even without getting help from experts. The open source madness and the dominance of agile software processes nowadays encourage enterprises to walk alone. But the final outcome does not usually differ much: they end up with monolithic, non-maintainable, static, and incapable primary (business) processes. To top it off, supporting and organizational processes are mostly ad hoc and unmanageable, too. On the one hand, manufacturing processes have been improved toward "zero defect" products even in the avionics industry; on the other hand, the software industry is still discussing the value of "unit tests" and "refactoring" for dynamic complex processes. The dilemma between these two pictures was captured as "essential difficulties" and "accidental difficulties" in the famous article of (Brooks, 1987). He points out that essential difficulties for an enterprise should be the ones having direct effects on the core business, such as the difficulties in setting up a sales process that minimizes sales force costs for a mobile vendor. Accidental difficulties, in contrast, are the ones that help achieve the core business functionality but have no direct effect on it, such as the difficulties of using Java instead of C++ for implementing the sales business processes. Both primary and supporting processes in the software industry need to deal with essential difficulties rather than accidental ones. The rise of Python, PHP and Ruby against C++, Java and C# for Web-based process implementations is just one simple piece of evidence.

Software Factory Automation is inspired by the way other industries have been realizing industrial factory automation for decades. Industrial factory automation utilizes the concept of "Programmable Logic Controllers (PLCs)" to facilitate the production of domain specific artifacts in isolated units. PLCs may also take their place in moving assembly lines to unify the production process. Factory automation in milk factories, for example, bridges diverse units of pasteurization, bottling, bottle capping, and packaging through moving assembly lines, all designed by the use of PLCs. The bottle and the bottle cap in this example are both domain specific artifacts that can be reused in the production of various milk products such as regular, skimmed or semi-skimmed, or even in the bottling of still or sparkling water, not just milk. PLCs improve the reusability of domain specific artifacts with a consistent design in mind: a PLC has a Programmable Processor (PP) to be programmed with a Computer Language (CL) through a Development Environment (DE). So does the Domain Specific Kit (DSK) abstraction of the SFA model. Figure 3 shows that a Domain Specific Kit has a Domain Specific Engine corresponding to the Programmable Processor, a Domain Specific Language corresponding to the Computer Language, and a Domain Specific Toolset corresponding to the Development Environment of the PLC concept. Just as PLCs are used for abstracting a wide range of functionalities like basic relay control or motion control, Domain Specific Kits in the Software Factory Automation approach can be designed specifically to abstract certain things such as screen/report rendering or business rule execution in software factories. Such a vision in product development naturally abstracts the associated processes from each other and clearly draws the boundaries of business and supportive processes.

Figure 3. Software Factory Automation analogy with Industrial Factory Automation
Separating The Concerns with Domain Specific Kits

Domain Specific Kits have been devised to isolate discrete concerns in the solution domain and to specify reusable Domain Specific Artifacts that abstract them. They allow software artifacts to be modeled and developed in isolation, and they enable their composition via a choreography model (Cetin et al. at MoRSe, 2006). The DSK concept introduces a set of domain specific terms (constituents), and Figure 4 depicts the interrelation among them:

Figure 4. Conceptual model of Domain Specific Kits

• Domain Specific Language (DSL) is a language dedicated to a particular domain or problem, with appropriate built-in abstractions and notations. An example of a DSL is the Structured Query Language (SQL) of relational database management systems, which can command the backend database engine both for data description (Data Description Language – DDL) and data manipulation (Data Manipulation Language – DML).

• Domain Specific Engine (DSE) is an engine particularly designed and tailored to execute a dedicated Domain Specific Language. An example of a DSE is the "Database Engine" of relational database management systems, which can be commanded by SQL and executes database queries as specified.

• Domain Specific Toolset (DST) is an environment to design, develop, and manage software artifacts of a dedicated Domain Specific Language. An example of a DST is the "Interactive SQL (iSQL) Tool" that can be used by database designers and users to retrieve the database schema graphically, construct queries visually, and execute SQL commands directly.

• Domain Specific Kit (DSK) is the composite of a Domain Specific Language, a Domain Specific Engine and a Domain Specific Toolset. An example of a DSK is the "Relational Database Management System" that includes SQL as the DSL, the Database Engine as the DSE, and iSQL Tools as the DST.

• Domain Specific Artifact Type (DSAT) is a software artifact type that a certain Domain Specific Kit can express, execute and facilitate the development of. An example of a DSAT is "Stored Procedure" or "Trigger", which can be used to command the database engine for bulk or event-based operations.

• Domain Specific Artifact (DSA) is an artifact that is expressed by a Domain Specific Language, developed by a Domain Specific Toolset, and executed by a Domain Specific Engine. An example of a DSA is the "Stored Procedure for Personnel Database Export" that can be expressed in SQL and executed directly by the Database Engine.
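The chapter defines these constituents conceptually rather than as a programming API. Purely as an illustration, the following minimal sketch renders them as Java interfaces; every type and method name here is invented and should not be read as part of the SFA model itself.

    import java.util.Map;

    // Hypothetical rendering of the DSK conceptual model; all names invented.
    interface DomainSpecificArtifact {
        String name();          // e.g. "Stored Procedure for Personnel Database Export"
        String artifactType();  // its DSAT, e.g. "Stored Procedure"
        String source();        // the artifact body, expressed in the kit's DSL
    }

    interface DomainSpecificEngine {
        // Executes one artifact written in the engine's dedicated DSL,
        // given a shared execution context (cf. the choreography sections).
        Object execute(DomainSpecificArtifact artifact, Map<String, Object> context);
    }

    interface DomainSpecificToolset {
        // Design-time support; designed artifacts are persisted as DSL text.
        DomainSpecificArtifact design(String artifactType);
    }

    // A DSK is the composite of a DSL (identified by name), a DSE and a DST.
    final class DomainSpecificKit {
        final String dslName;                 // e.g. "SQL", "RuleML", "jPDL", "EBML"
        final DomainSpecificEngine engine;    // e.g. a database engine
        final DomainSpecificToolset toolset;  // e.g. an interactive SQL tool

        DomainSpecificKit(String dslName, DomainSpecificEngine engine,
                          DomainSpecificToolset toolset) {
            this.dslName = dslName;
            this.engine = engine;
            this.toolset = toolset;
        }
    }

In this reading, a relational database system is one concrete DSK: SQL is the DSL, the database engine is the DSE, and an interactive SQL tool is the DST.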
(Griss and Wentzel, 1998) first used the term "Domain Specific Kits" within the context of "flexible software factories". However, the Domain Specific Kit concept in the Software Factory Automation model diverges from Griss's definition and attributes new content to the old term. DSKs in SFA are lightweight and loosely coupled with each other, so their artifacts (DSAs) can be designed to be composable with others. The artifacts are defined and composed by declarative approaches. Furthermore, artifacts can be developed and contained within reusable software assets. To achieve this, DSK design aims to maximize the reuse of Domain Specific Artifacts such as screen and report layouts or certain business rules (Altintas et al., 2008). Finally, DSKs are not particular to a product family; they can even be reused across different product lines (Altintas and Cetin, 2008).

To exemplify the Domain Specific Kit abstraction, a Business Rules Management System (BRMS) is discussed here briefly. A BRMS enables the segregation of business rules from the application, where they crosscut almost every tier from content to service (Cetin et al. at DPM, 2006). In our designs, we abstract business rules management from the rest of the picture with a practical Aspect-Oriented Framework called RUMBA ([RU]le-based [M]odel for [B]asic [A]spects), which provides a declarative environment with a GUI console for rule-based business process modeling. It basically enables the design of any business entity (e.g., person) through the dynamic composition of feature-driven "basic aspects" (e.g., identity as a permanent feature and instructorship as a varying feature used when the person is expected to be an instructor). Moreover, every basic aspect, such as instructorship, may contain other basic aspects, recursively. RUMBA also allows the dynamic definition of facts, rules, and rule-sets. RUMBA can be regarded as a Domain Specific Kit that isolates business rules management from the rest of the picture as follows: it has a very lightweight rule inference engine as the Domain Specific Engine, a visual rule editor as the Domain Specific Toolset that can dynamically manage business rule changes, and it uses RuleML (RuleML, 2004) as the Domain Specific Language; rule and composite-rule are its Domain Specific Artifact Types. RUMBA can take part in business choreography through API-based, service-based, or other types of interfaces.
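To give a feel for how such a kit keeps rules out of application code, the sketch below shows an application-side view of a RUMBA-like rules kit. The chapter does not publish RUMBA's API, so the interface, class and rule-set names here are all hypothetical.

    import java.util.Map;

    // Invented facade for the rule inference engine (the kit's DSE). The
    // rule-set itself would live in RuleML and be maintained by business
    // users through the visual rule editor (the kit's DST).
    interface RuleEngine {
        boolean evaluate(String ruleSetName, Map<String, Object> facts);
    }

    class LoanApprovalService {
        private final RuleEngine rules;

        LoanApprovalService(RuleEngine rules) {
            this.rules = rules;
        }

        boolean approve(Map<String, Object> application) {
            // "consumer-loan-eligibility" is an invented rule-set id; editing
            // that rule-set in the toolset changes behavior without
            // redeploying this code.
            return rules.evaluate("consumer-loan-eligibility", application);
        }
    }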
Composing The Concerns with Choreography

The Domain Specific Kit abstraction enables the separation of concerns into different domains. Each concern is expressed and executed by a different Domain Specific Language and its associated Domain Specific Engine, much as PLCs control certain concerns in industrial automation. Naturally, every separation should end up with a composition as well. For the composition of domain specific artifacts expressed in Domain Specific Languages, Software Factory Automation employs a choreography model (a language and an engine) that relies on SOA as a paradigm for managing resources, describing process steps, and capturing interactions between a service and its environment (see Figure 5).

Figure 5. Software Factory Automation for service-oriented computing

Software Factory Automation anticipates the use of a choreography language for describing the interaction of artifacts according to the choreography's goal. Composing Domain Specific Artifacts declaratively with a choreography language enables the independent development of Domain Specific Artifacts and also ensures their interoperability. Domain Specific Engines are the execution engines for Domain Specific Artifacts, and they are composed via a choreography engine. The choreography engine requires the separation of concerns across different Domain Specific Engines; thus, deferred encapsulation (Greenfield et al., 2004) can be achieved by plugging Domain Specific Engines in and out as needed. Deferred encapsulation enables the flexibility, modifiability and dynamism of the implemented processes. This provides an execution model for collaborative and transactional business processes based on a transactional finite-state machine. It is a non-monolithic execution model, and it does not need every sort of detail to be specified at once at the beginning of process design. The features mapped onto specific Domain Specific Languages are executed by the corresponding Domain Specific Engines. Therefore, dynamic pluggability and context-awareness of Domain Specific Engines are crucial for the runtime execution model. The choreography engine enables communication and coordination among Domain Specific Engines. It ensures context management, state coordination, communication, produce/consume messaging, nested processes, distributed transactions, and process-oriented exception handling. The flexibility resulting from the composition of loosely coupled artifacts is crucial for process dynamism.

The choreography model of the strategy has been designed in compliance with the Web Services Composite Application Framework (WS-CAF, 2003). It provides the coordination among parties (in our case, Domain Specific Engines), manages the common context, and manages transactions for interoperability across existing transaction managers, for long-running compensations, and for asynchronous business process flows. The common context for the proposed strategy includes the session identifier and session context, security attributes, transaction identifier and context, protocol specific attributes, the client identifier, and business domain specific identifiers, such as "branch code" or "customer id" in a banking context.

Domain Specific Kits bring out the concept of reusable, configurable, specialized work units in terms of services within the context of business processes. The proposed strategy abstracts the details of business services with Business Service Description DSLs and makes them available for integration with other business services and software artifacts through the choreography model. This helps business service and business process designers model the core business rather independently of other process quality issues such as security and availability. Such aspects should have been dealt with during the design of the reference architecture specific to that business domain.
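Building on the hypothetical interfaces sketched earlier, the following illustrates the shape of the common context and of a choreography engine that composes Domain Specific Engines. The linear loop is a deliberate simplification of the transactional finite-state machine, and all names are again invented rather than taken from the chapter.

    import java.util.HashMap;
    import java.util.Map;

    // Common context fields named in the text; not a published API.
    class ChoreographyContext {
        String sessionId;
        Map<String, Object> securityAttributes = new HashMap<>();
        String transactionId;
        Map<String, String> domainIdentifiers = new HashMap<>(); // e.g. "branchCode", "customerId"
    }

    class ChoreographyStep {
        String dslName;                   // which engine executes this state, e.g. "jPDL"
        DomainSpecificArtifact artifact;  // the DSA to execute at this state
    }

    class ChoreographyEngine {
        private final Map<String, DomainSpecificEngine> engines = new HashMap<>();

        // Deferred encapsulation: engines can be plugged in and out as needed.
        void register(String dslName, DomainSpecificEngine engine) {
            engines.put(dslName, engine);
        }

        // Simplified stand-in for the transactional finite-state machine:
        // each step is routed to its engine with the shared context;
        // transaction demarcation and state coordination are omitted here.
        void run(ChoreographyStep[] steps, ChoreographyContext ctx) {
            for (ChoreographyStep step : steps) {
                DomainSpecificEngine engine = engines.get(step.dslName);
                engine.execute(step.artifact, Map.of("context", ctx));
            }
        }
    }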
Setting up a Service-Oriented Software Factory with the SFA Approach

The Software Factory Automation approach with Domain Specific Kits enables the specification of coarse-grained reusable asset models as collections of Domain Specific Artifacts and their composition rules. This strategy encapsulates correlated features (hence artifacts) within more cohesive software asset models and manages them with higher-level abstractions. As an example from financial product family systems, the "Alert and Notification Manager" and the "Blacklist Manager" are two software assets that are highly abstract with respect to the objects, components and even services used by traditional business process management approaches (Altintas et al., 2008).

Figure 5 shows that SFA instantiates a dedicated software factory for a product or a sufficiently complex information system where business components can be integrated into the Service-Oriented Architecture. Here, two different process types exist: one for setting up the relevant software factory and the other for constructing the product or information system with that software factory. The processes for setting up the software factory are rather straightforward and defined within the context of the SFA vision (Altintas et al., 2007). The software factory setup first instantiates a concrete architecture from the SOA-based product line reference architecture, organizes the development environment, describes the service asset model, and aligns the primary and supporting processes with standard process models. For example, the software factory can be set up with supportive processes selected from the software process library of Lighthouse (Cetin et al. at EuroSPI2, 2006), such as ISO 9001 compliant or CMMI compliant ones. Similarly, primary processes can be selected as declarative implementations with the Business Processes Kit (see Table 1) based on jBPM (JBoss Java Business Process Management), or as service-based hard-coded implementations. The selection of primary and supporting processes from the software factory template forms the concrete "Process and Service Quality Model" in the instantiated software factory (see Figure 5). The final activity is selecting or implementing the proper Domain Specific Kits and integrating them.

In our software factories, we use Aurora (Altintas and Cetin, 2005) as the SOA-based reference architecture for product line design and software artifact composition. However, the SFA approach allows other reference models, such as the Spring Framework (Spring, 2003), to be used in software factories. Aurora is a platform-independent product line infrastructure based on rich client and enterprise integration models for Web applications, with drag & drop design and development environments and tools. Aurora adopts a rich-client strategy where a dedicated rendering engine is placed automatically on Web clients, which renders the screen layouts and executes the GUI events specified in an XML structure known as EBML (Enhanced Bean Markup Language). EBML is capable of declaratively expressing reusable screen regions, defining sanity checks and arithmetic expression rules, executing local and remote method calls, versioning and caching structural parts separately, dealing with static reference data, managing the client context, and supporting multiple languages (Altintas and Cetin, 2005).
Table 1. List of Domain Specific Kits used in the banking system

Rich Client Kit (artifact types: Page, Region, Popup): business-domain-independent XML-based technology used for power screen design in Internet applications.

Reporting Kit (artifact type: Report): business-domain-independent XML-based technology used for report content generation, rendering and presentation in Internet applications, based on the open source project JasperReports, which can be retrieved from http://jasperforge.org/sf/projects/jasperreports

Business Services Kit (artifact type: Service): a lightweight kit for the development, publishing and administration of business services, with registry, repository, meta-model and policy management services.

Business Rules Kit (artifact types: Rule, Composite-Rule): business-domain-independent kit for business rules segregation, where all aspects, facts, rules and rule-sets can be defined and managed dynamically by means of a GUI console; based on the RUMBA Framework (Cetin et al. at DPM, 2006).

Business Processes Kit (artifact type: Process): a jBPM-based kit for business process management providing design, development and execution of business processes; jBPM is the open source project implemented by JBoss and can be retrieved from http://www.jboss.com/products/jbpm

Data Persistence Kit, also known as the Persistent Object Model (POM) Kit (artifact type: POM): an XML-based Object-to-Relational (O2R) mapping kit for defining, deploying and executing SQL queries by mapping them to Plain Old Java Objects (POJOs).

Batch Processes Kit (artifact type: Job): a special-purpose kit for defining, scheduling and executing batch jobs with enterprise-class features, based on the open source project Quartz, which can be retrieved from http://www.opensymphony.com/quartz/
EBML descriptions are automatically recognized by the rendering engine (ERE – EBML Rendering Engine) on Web clients. ERE completely takes away the need for static GUI code and empowers business service and business process designers to manage the cases dynamically at run time. ERE can also instantiate business processes and invoke business services implemented at the backend. Backend services are quite lightweight in the Aurora reference architecture. The backend Aurora servers (simple Java servlet extensions) interpret the business service and business process requests expressed in XML and automatically convert them to POJO (Plain Old Java Object) calls. In this transformation, service names are matched with previously registered Java service interfaces, and service parameters are transformed into composite and indexed Java data structures known as CSBag. The service can then be invoked dynamically as a high-performance Java call. After service execution, the service results are transformed from the composite Java data structure (CSBag) back to the XML constituents specified in the EBML remote call section. This SOA vision provides a very scalable, secure, high-performance and dynamic service infrastructure, and even Java-based business processes expressed in jBPM can be incorporated. Moreover, data persistency can be managed in a transactional manner with the Aurora Persistent Object Model (POM) Kit (see Table 1). The Aurora SOA baseline surpasses the bottlenecks of using Web Services in enterprise applications while providing almost the same benefits. Where actual Web Service interoperability is needed, Aurora provides automatic transformations from Aurora services to Web Services and back. This way, any Aurora business service or business process can take part in the orchestration of Web Services for intra-process management.
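A rough sketch of that XML-to-POJO dispatch is given below. CSBag is named in the chapter but its API is not, so this stand-in (and the dispatcher around it) is invented; the parsing of the EBML/XML request is assumed to have already happened.

    import java.util.HashMap;
    import java.util.Map;

    // Invented stand-in for CSBag: a composite, key-addressable parameter container.
    class CSBag {
        private final Map<String, Object> values = new HashMap<>();
        void put(String key, Object value) { values.put(key, value); }
        Object get(String key) { return values.get(key); }
    }

    // A previously registered Java service interface.
    interface BusinessService {
        CSBag invoke(CSBag parameters);
    }

    class AuroraServiceDispatcher {
        private final Map<String, BusinessService> registry = new HashMap<>();

        void register(String serviceName, BusinessService service) {
            registry.put(serviceName, service);
        }

        // Matches the service name from the parsed XML request to a registered
        // interface and invokes it as a plain, high-performance Java call.
        CSBag dispatch(String serviceName, CSBag parameters) {
            BusinessService service = registry.get(serviceName);
            if (service == null) {
                throw new IllegalArgumentException("Unknown service: " + serviceName);
            }
            return service.invoke(parameters);
        }
    }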
Dynamic Process Management with Domain Specific Kits

The dynamic process management strategy with Domain Specific Kits identifies the primary and supporting processes pertaining to a domain, isolates them from each other for every domain, models them in a domain specific way relying on a service-oriented reference architecture, composes the individualized processes in a process composition model based on process choreography, and provides automation support for supporting and organizational processes. The underlying conceptual model of the dynamic process management strategy based on Domain Specific Kits is given in Figure 6.

Figure 6. Conceptual model of dynamic process management strategy

In this strategy, Domain Specific Kits play a key role in the separation of concerns for dynamic process management. The "Reference Architecture" underlies the composition of Domain Specific Kits (Engines) as well as constraining the design of the "Process Composition Model". The Process Composition Model is based on choreography and executed by a common choreography engine. The "Modeling Environment" utilizes the constraints of the "Reference Architecture" in providing toolsets for the specification of Domain Specific Artifacts expressed in a Domain Specific Language, as well as for the specification of the process composition in a choreography language. "Software Process Automation" manages them all, with repositories keeping the different versions of such specifications. Software Process Automation again plays a key role here in managing the "battlefield processes" and complying with different software process management standards such as CMMI and SPICE (Cetin et al. at EuroSPI2, 2006).

The DSK approach enables process designers to link processes to each other, and to other relevant artifacts, in a very loosely coupled manner. Both the process definitions and the choreography can be specified declaratively, and process execution is left to the proper Domain Specific Engines and the Choreography Engine. Moreover, it also helps production planners manage the configuration (as well as commonality and variability) issues according to the reference architecture of the Software Factory. The design of the Domain Specific Engines and the Choreography Engine incorporates different variability management techniques, ranging from parameterization to bytecode injection at the low level, which help process designers deal easily with process customization and integration at higher levels.
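Tying the earlier sketches together, a factory instantiation might wire engines into the choreography engine and run a declaratively specified composition roughly as follows. The stub engines and step definitions are, of course, invented for illustration; in a real setup the steps would be loaded from the versioned repository of declarative specifications rather than hard-wired in code.

    // Usage sketch building on the hypothetical types introduced above.
    class DynamicProcessDemo {
        public static void main(String[] args) {
            ChoreographyEngine choreography = new ChoreographyEngine();

            // Stub engines standing in for the Business Rules Kit and the
            // Business Processes Kit of Table 1.
            choreography.register("RuleML", (artifact, context) -> {
                System.out.println("rules: evaluating " + artifact.name());
                return Boolean.TRUE;
            });
            choreography.register("jPDL", (artifact, context) -> {
                System.out.println("process: executing " + artifact.name());
                return null;
            });

            ChoreographyContext ctx = new ChoreographyContext();
            ctx.sessionId = "session-42";
            ctx.domainIdentifiers.put("branchCode", "0042");

            ChoreographyStep rules = step("RuleML", "consumer-loan-eligibility", "Rule");
            ChoreographyStep flow  = step("jPDL", "loan-approval-flow", "Process");
            choreography.run(new ChoreographyStep[] { rules, flow }, ctx);
        }

        static ChoreographyStep step(String dsl, String name, String type) {
            ChoreographyStep s = new ChoreographyStep();
            s.dslName = dsl;
            s.artifact = new DomainSpecificArtifact() {
                public String name() { return name; }
                public String artifactType() { return type; }
                public String source() { return ""; } // DSL text omitted in the sketch
            };
            return s;
        }
    }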
Case Study: Central Operations Management in a Bank

The strategy presented here has already been put into action for the implementation of MEROPS (Central Operations Management) at a mid-scale bank in Turkey with 250 branches and 1.5 million customers. MEROPS is a specialized unit in the bank responsible for the execution of banking operations that usually require higher-level competence on behalf of the branches. Performing the operations centrally requires the management of documents and associated workflows between MEROPS and the 250 branches. It is a very dynamic business environment that has to cope with the ever-changing business requirements of 2500 bankers every day. A simplified view of the MEROPS organization and business flow is presented in Figure 7.

Figure 7. MEROPS high-level process flow

MEROPS requires dynamic processes from several perspectives. First, the centralized operations have different completion characteristics: some, such as Electronic Fund Transfer (EFT) orders, are executed and completed immediately; on the other hand, the requested operation may be a financial analysis report that usually takes two weeks to complete. Another major issue is the diversity of the documents accompanying different processes and sub-processes. Following the same example, an EFT operation requires a simple customer order form, whereas a financial analysis report requires a set of documents including company balance sheets, tax declarations and sector-specific information. A sample process is given in Figure 8.

Figure 8. MEROPS main process flow

The third difficulty occurs in the organization of the central units. Considering the same example again, the EFT order processing unit is a simple one consisting of 4 makers and 1 checker. However, the financial report preparation unit has many experts preparing different parts of the report, and they may be specialized in different domains. Management of the report pools and approvals requires the execution of several sub-processes within the central unit. The last and most complicated dynamism needs stem from the centralization strategy of the banking operations. It is naturally not a one-shot transformation process, which raises managerial issues for the coexistence of legacy and brand-new business processes. Since the reengineering of the banking processes is ongoing, only a single process for a limited set of branches can be selected for centralization at a time. Hence, while the selected branches are diverted to MEROPS, all other branches keep executing their existing operations locally. In short, both central and non-central business processes might be running across different branches at the same time.
In order to respond to the business needs in a timely manner, a new infrastructure has been designed and implemented according to the strategy given here, where business workflows, business rules and business services can be created and modified dynamically. We have devised several Domain Specific Kits to isolate the technical concerns, and the corresponding Domain Specific Engines have been employed in the reference architecture of the banking infrastructure. The Domain Specific Kits are listed in Table 1, and a simplified view of the reference architecture is depicted in Figure 9.

Figure 9. Reference architecture for banking system

The reference architecture for the banking system employs a choreography model designed in compliance with the Web Services Composite Application Framework (WS-CAF, 2003). The choreography model handles the global context management, coordination, and transaction management of all parties (Domain Specific Engines). The common context includes the session identifier, session context, security attributes, transaction context, protocol specific attributes, and business domain specific identifiers for banking. The interaction model of the composition relies on SOA as a paradigm for managing resources, describing process steps, and capturing interactions between an artifact and its environment.

The composition model of Domain Specific Artifacts has been driven by extending the Process Description Language (jPDL) of the Business Processes Kit (see Table 1) according to the requirements of the bank. jPDL is an XML-based language that can express business processes and service interactions declaratively. The Business Processes Kit has extended jPDL to include Aurora service calls and transactional support. The extension includes nodes for time and amount controls, task nodes with a built-in approval mechanism, integration of the process model with the banking organization model, and conditional service execution nodes where multiple services can be chained by linking their inputs and outputs. These extensions enable the modeling of the dynamic MEROPS processes.

After the bank started using the implementation, the primary processes of both parties were totally separated. What IT expects from the business turned out to be the upper-level process specifications modeled by using the Domain Specific Toolsets (business processes, business rules, business services, and user interfaces, all given in declarative form) and specified in the proper Domain Specific Languages. In fact, since the toolsets automatically generate the artifacts in the so-called Domain Specific Languages, the IT department no longer expects business people to be DSL-savvy. The specifications are put into a common repository with versions; IT experts then access the repository and collect the related specifications. Repository management tools also monitor this process, so that non-repudiation issues between the IT and business departments are resolved automatically. Face-to-face communication between the IT and business departments has been limited; the only communication required in the battlefield area is left to the business analysts, who are members of the IT department. The business analysts are also expected to resolve misunderstandings and conflicts through prototyping and pilot sessions using the same toolset. They also act as "client advocates" to meet the specific process requirements of the business departments. This approach has reduced process development cycles, limited conflicts, dramatically lowered the ratio of bugs, and improved IT service and product quality.
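To give a flavor of how such a jPDL extension node might be implemented, the sketch below assumes jBPM 3's ActionHandler contract on the classpath (the only element borrowed from a real library); the gateway class, the service name and the variable names are invented, and the bank's actual extension classes are not published in this chapter.

    import org.jbpm.graph.def.ActionHandler;
    import org.jbpm.graph.exe.ExecutionContext;

    // Hypothetical Aurora service-call node for jPDL. The fields would be
    // populated by jBPM from the nested elements of the <action>
    // configuration in the process definition XML.
    public class AuroraServiceCallHandler implements ActionHandler {

        String serviceName;     // e.g. "eftOrder" (invented)
        String inputVariable;   // process variable holding the service input
        String outputVariable;  // process variable receiving the service result

        public void execute(ExecutionContext context) throws Exception {
            Object input = context.getContextInstance().getVariable(inputVariable);

            // Invented stand-in for the Aurora XML-to-POJO service invocation.
            Object result = AuroraServiceGateway.invoke(serviceName, input);

            context.getContextInstance().setVariable(outputVariable, result);
            context.leaveNode(); // continue along the node's default transition
        }

        static class AuroraServiceGateway {
            static Object invoke(String serviceName, Object input) {
                // Would delegate to the Aurora backend dispatcher in a real system.
                return input;
            }
        }
    }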
Future Trends

Information systems and complex product structures are changing the world we live in. Ubiquitous computing requires context awareness and Service Level Agreements at every level of detail, which degrades the dynamism of primary and supporting processes. On the other hand, high performance computing needs necessitate the use of backend service grids, even in virtualized form. Virtualization in service-oriented computing draws new requirements for "dynamic SOA governance" and "autonomous services".

The Web content is growing dramatically every day. Providing attractive content is not possible without mashing up brand-new content with existing content. Web 2.0 trends challenge every organization to change the way it constructs software products and information systems. Maybe the biggest challenge will be the integration of legacy content with the new. In order to overcome the existing and future challenges of service-oriented computing, people are searching for ways ranging from Model-Driven Development to Aspect-Oriented Software Development. Another research area is providing "formal modeling techniques" for business processes to enable "complete coverage" and "formal verification" as well.

Last but not least is the trend toward widespread use of mobile devices and rich clients for business processes. Emerging business fields such as customer relationship management and sales force automation come with their own challenges in dynamic process management. Such challenges include dynamic routing in line with location-based services and integrated process management in business mergers and acquisitions.
Conclusion
This chapter has presented the challenges of dynamic process management from an industrial perspective and introduced a strategy to meet these challenges. Inspired by the Software Factory Automation vision, this strategy uses the Domain Specific Kit abstraction to separate the concerns of primary and supporting actors and then composes them through a Choreography Model comprising a choreography engine and a choreography language. In the real-life case study, we used the seven basic Domain Specific Kits given in Table 1. Our observation is that even with this restricted set of Domain Specific Kits, we can quite effectively segregate complex processes from each other and manage them individually. After setting up this strategy, we were able to deploy new business processes very efficiently: the business people (bankers in our case) could create, enact, and execute business processes without calling on IT personnel, as long as no technical barrier arose, such as the need to extend the capabilities of the Domain Specific Kits or to compose the Domain Specific Artifacts more conveniently.

The proposed strategy helped us overcome the challenges given in the introduction as follows. We refrain from using generic approaches and techniques for process modeling by partitioning the big picture into domain-specific parts. We dedicated a Domain Specific Kit to each partition and aligned the primary and supporting processes accordingly. This strategy enabled us to integrate external processes and services quite effectively. For example, the Financial Gateways Family System was designed with such a vision and integrated more than 30 external business processes and services into a banking system. The second challenge was the lack of architectural coverage in existing approaches. The strategy proposed here treats architectural coverage as a first-class entity. First, it uses the service-oriented reference architecture (Aurora) to instantiate the software factories. Second, every Domain Specific Kit should have a dedicated Domain Specific Engine designed and implemented by taking into account the functional as well as non-functional requirements of that domain. This can resolve the orthogonal complexities of architectural aspects crosscutting process management. The third drawback was the lack of support for integrating legacy processes and for the coexistence of new processes with legacy ones. The proposed strategy introduces an integration model using service mashups with Domain Specific Kits, where the front-ends of legacy applications can be integrated with screen-scraper Domain Specific Kits and the back-ends with B2B wrapper Domain Specific Kits. Another difficulty was achieving dynamism not only in primary processes but also in supporting and organizational processes. The proposed strategy isolates every process (not only primary processes) of a particular domain from the rest of the picture. This makes it easier for software process managers to define and manage the activities of software artifact implementation. Moreover, the feedback from primary processes helps to improve the Domain Specific Engines and Domain Specific Language constructs for more dynamic process definitions, and hence enriches the common business vocabulary. The last challenge was the heavyweight setup procedures of service and process execution environments. Since the proposed strategy uses a software factory approach, it is already armed with a charted roadmap for instantiating the reference architecture. The reference architecture model provided by Aurora, as well as the dedicated Domain Specific Kits such as the Business Processes Kit and the Business Rules Kit, helps to establish the execution infrastructures for business services and processes effectively and to achieve reliability by employing the existing kits.
References
Aktas, Z., & Cetin, S. (2006). We envisage the next big thing. IDPT-2006 Integrated Design and Process Technology Conference (pp. 404-411). San Diego, California.
Altintas, N. I. (2007). Feature-based software asset modeling with domain specific kits. Doctoral dissertation, Middle East Technical University, Department of Computer Engineering, Ankara, Turkey. Retrieved from http://etd.lib.metu.edu.tr/upload/12608682/index.pdf
Altintas, N. I., & Cetin, S. (2005). Integrating a software product line with rule-based business process modeling. TEAA-2005 Trends in Enterprise Application Architecture Workshop, VLDB-2005 Conference (LNCS 3888, pp. 15-28).
Altintas, N. I., & Cetin, S. (2008). Managing large scale reuse across multiple software product lines. ICSR-2008 10th International Conference on Software Reuse (LNCS 5030, pp. 166-177). Beijing, China.
Altintas, N. I., Cetin, S., & Dogru, A. (2007). Industrializing software development: The "factory automation" way. TEAA-2006 2nd Trends in Enterprise Application Architecture Conference (LNCS 4473, pp. 54-68).
Altintas, N. I., Cetin, S., & Surav, M. (2008). OCTOPODA: Building financial gateways family system using domain specific kits. ICONS-2008 Third International Conference on Systems (pp. 85-92). Cancun, Mexico. DOI 10.1109/ICONS.2008.57
Ambler, S. W. (1998). Process patterns. Cambridge University Press/SIGS Books.
BPEL. (2004). BPEL: Business process execution language. Retrieved March 30, 2008, from http://bpel.xml.org/
BPML. (2002). BPML: Business process modeling language. Retrieved April 12, 2008, from http://www.ebpml.org/bpml.htm
BPMN. (2004). BPMN: Business process modeling notation. Retrieved March 12, 2008, from http://www.bpmn.org/
Brooks, F. P. (1987). No silver bullet: Essence and accidents of software engineering. Computer, 20(4), 10-19. doi:10.1109/MC.1987.1663532
Burmeister, B., Steiert, H. P., Bauer, T., & Baumgärtel, H. (2006). Agile processes through goal- and context-oriented business process modeling. DPM-2006 Dynamic Process Management Workshop at Business Process Management 2006 Conference (LNCS 4103, pp. 217-228).
Cetin, S., & Altintas, N. I. (2008). An integration model for domain specific kits. IDPT-2008 Integrated Design and Process Technology Conference.
Cetin, S., Altintas, N. I., Oguztuzun, H., Dogru, A., Tufekci, O., & Suloglu, S. (2007). A mashup-based strategy for migration to service-oriented computing. ICPS'07 IEEE International Conference on Pervasive Services (pp. 169-172). Istanbul, Turkey.
Cetin, S., Altintas, N. I., Oguztuzun, H., Dogru, A., Tufekci, O., & Suloglu, S. (2007). Legacy migration to service-oriented computing with mashups. ICSEA'07 International Conference on Software Engineering Advances (pp. 21-30). Cap Esterel, France. DOI 10.1109/ICSEA.2007.49
Cetin, S., Altintas, N. I., & Sener, C. (2006). An architectural modeling approach with symmetric alignment of multiple concern spaces. ICSEA'06 International Conference on Software Engineering Advances (pp. 48-57). Tahiti, French Polynesia. DOI 10.1109/ICSEA.2006.261304
Cetin, S., Altintas, N. I., & Solmaz, R. (2006). Business rules segregation for dynamic process management with an aspect-oriented framework. DPM-2006 Dynamic Process Management Workshop at Business Process Management 2006 Conference (LNCS 4103, pp. 191-202).
Cetin, S., Altintas, N. I., & Tufekci, O. (2006). Improving model reuse with domain specific kits. MoRSe'06 Model Reuse Strategies Workshop (pp. 13-16). Warsaw, Poland.
Cetin, S., Tufekci, O., Buyukkagnici, B., & Karakoc, E. (2006). Lighthouse: An experimental hyperframe for multi-model software process improvement. EuroSPI2-2006 European Software Process Improvement and Innovation Conference. Joensuu, Finland.
Cibran, M. A., & D'Hondt, M. (2006). High-level specification of business rules and their crosscutting connections. Aspect-Oriented Workshop at AOSD-2006, Bonn, Germany.
Date, C. J. (2000). What not how: The business rules approach to application development. Addison Wesley Longman Inc.
Dou, W., Chueng, S. C., Chen, G., Wang, J., & Cai, S. J. (2005). A hybrid workflow paradigm for integrating self-managing domain-specific applications. GCC-2005 4th International Conference on Grid and Cooperative Computing (LNCS 3795, pp. 1084-1095). Beijing, China.
Dragos, A. M., & Johnson, R. E. (1998). A proposal for a common infrastructure for process and product models. In Proc. OOPSLA Mid-year Workshop on Applied Object Technology for Implementing Lifecycle Process and Product Models (pp. 81-82). Denver, Colorado.
Fowler, M. (2002). Patterns of enterprise application architecture. Addison-Wesley.
Greenfield, J., Short, K., Cook, S., & Kent, S. (2004). Software factories: Assembling applications with patterns, models, frameworks, and tools. Wiley.
Griss, M. L., & Wentzel, K. (1994). Hybrid domain specific kits for a flexible software factory. In Proceedings of the Annual ACM Symposium on Applied Computing (pp. 47-52). Phoenix, Arizona.
Harrison, W. H., Ossher, H. L., & Tarr, P. L. (2002). Asymmetrically vs. symmetrically organized paradigms for software composition. IBM Research Division, Thomas J. Watson Research Center, RC22685 (W0212-147).
Havey, M. (2005). Essential business process modeling. O'Reilly.
Keller, A., & Ludwig, H. (2002). Defining and monitoring service level agreements for dynamic e-business. In Proceedings of LISA '02: Sixteenth Systems Administration Conference (pp. 189-204). Philadelphia, PA, USA.
Kumaran, S., Bishop, P., Chao, T., Dhoolia, P., Jain, P., & Jaluka, R. (2007). Using a model-driven transformational approach and service-oriented architecture for service delivery management. IBM Systems Journal, 46(3), 513-529.
Küster, J. M., Koehler, J., & Ryndina, K. (2006). Improving business process models with reference models in business-driven development. BPD-2006 Business Process Design Workshop at Business Process Management 2006 Conference (LNCS 4103, pp. 35-44), Vienna, Austria.
Pesic, M., & van der Aalst, W. M. P. (2006). A declarative approach for flexible business processes management. DPM-2006 Dynamic Process Management Workshop at Business Process Management 2006 Conference (LNCS 4103, pp. 169-180), Vienna, Austria.
RuleML. (2004). RuleML: Rule markup language. Retrieved April 15, 2008, from http://www.ruleml.org
Russell, N., ter Hofstede, A. H. M., van der Aalst, W. M. P., & Mulyar, N. (2006). Workflow control-flow patterns: A revised view. BPM Center Report BPM-06-22. Retrieved from http://www.BPMcenter.org
Scacchi, W. (1998). Modeling, integrating, and enacting complex organizational processes. In K. Carley, L. Gasser, & M. Prietula (Eds.), Simulating organizations: Computational models of institutions and groups (pp. 153-168). MIT Press.
SEI (2005). SMART: The service-oriented migration and reuse technique (CMU/SEI-2005-TN-029).
Shi, X., Han, W., Huang, Y., & Li, Y. (2005). Service-oriented business solution development driven by process model. CIT-2005 Fifth International Conference on Computer and Information Technology (pp. 1086-1092). Shanghai, China.
Sneed, H. M. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9, 293-313. doi:10.1023/A:1018989111417
Spring. (2003). The Spring Framework. Retrieved April 19, 2008, from http://www.springframework.org/
Tombros, D. (1999). An event- and repository-based component framework for workflow system architecture. Doctoral dissertation, University of Zurich. Retrieved February 22, 2008, from http://www.ifi.uzh.ch/archive/diss/Jahr_1999/thesis_tombros.pdf
Tufekci, O., Cetin, S., & Altintas, N. I. (2006). How to process [business] processes. IDPT-2006 Integrated Design & Process Technology Conference (pp. 624-631). San Diego, California.
van der Aalst, W. M. P., ter Hofstede, A. H. M., Kiepuszewski, B., & Barros, A. P. (2003). Workflow patterns. Distributed and Parallel Databases, 14(3), 5-51. doi:10.1023/A:1022883727209
van der Aalst, W. M. P., & van Hee, K. M. (2002). Workflow management: Models, methods, and systems. Cambridge, MA: MIT Press.
Wild, W., Wirthenson, R., & Weber, B. (2006). Dynamic engines - A flexible approach to the extension of legacy code and process-oriented application development. WETICE-2006 Proceedings of the 15th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (pp. 279-284). Manchester, UK.
Winograd, T., & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Addison-Wesley, Norwood, NJ.
WS-BPEL 2.0. (2007). WS-BPEL 2.0: Web services business process execution language. Retrieved April 25, 2008, from http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html
WS-CAF. (2003). Web services composite application framework (WS-CAF). Retrieved July 28, 2007, from http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=ws-caf
Yeh, R. T. (1991). System development as a wicked problem. International Journal of Software Engineering and Knowledge Engineering, 1(2), 117-130. doi:10.1142/S0218194091000123
Yoder, W., & Razavi, R. (2000). Adaptive object-models. Conference on Object-Oriented Programming Systems, Languages and Applications (pp. 81-82). Minneapolis, Minnesota.
Ziemann, J., Leyking, K., Kahl, T., & Dirk, W. (2006). Enterprise model driven migration from legacy to SOA. Software Reengineering and Services Workshop.
Key Terms and Definitions
Domain Specific Kit: The composite of a "Domain Specific Language" to specify software artifacts pertaining to a domain, a "Domain Specific Engine" to interpret and execute the Domain Specific Language, and a "Domain Specific Toolset" to design, develop, and manage software artifacts of the dedicated Domain Specific Language. An example of a Domain Specific Kit is the "Relational Database Management System", which includes SQL as the Domain Specific Language, the database engine as the Domain Specific Engine, and iSQL tools as the Domain Specific Toolset.

Domain Specific Process: The set of activities dedicated to constructing the software artifacts in a particular domain. Domain Specific Processes are mainly discrete and isolated from each other in software construction; the best way to integrate them is through well-defined and unified process choreographies. In the relational database management domain, the database management system installation process, the database schema creation and maintenance processes, and the database backup-restore processes are all Domain Specific Processes.

Primary Process: Represents the activities constituting the major purpose of existence of an enterprise, which realize the services the organization was established for. Primary processes are also known as "business domain processes" and carry the utmost value for an organization. Credit card management, loan management, and account management are all primary processes in the banking domain.

Supporting Process: Performed to maintain the integrity of the product or service developed by primary processes, and to ensure that products and processes comply with predefined provisions and plans. Supporting processes accompany the primary processes; they do not typically result in the final products of the organization, but rather contribute indirectly to the value added. Documentation, configuration management, verification, training, and audit processes are all supporting processes.

Organizational Process: Includes activities that establish the business goals of the organization and develop the process, product, and resource assets which, when used, help to achieve those business goals. Managerial processes and resource and infrastructure processes are all in the organizational process category.

Software Factory Automation: The vision of setting up software factories in the way other industries have established industrial factories for manufacturing. It requires different know-how, expertise, infrastructures, and processes to put forward integrated software construction environments. Software Factory Automation analyzes the functional and non-functional requirements of a product domain, establishes the product line, and provides core assets and preliminary processes. It leverages the actual software construction in many ways, including dynamic process management and time-to-market issues.
Software Product Line Engineering: The combination of life cycle management and the architectural setup of a software product line. A Software Product Line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way. Software Product Line Engineering can enable rapid market entry and flexible response, and provide a capability for mass customization of a software product.
Endnote
1. The concepts given here have been researched mainly within the context of the OCTOPODA Project, which is partially supported by the Technology and Innovation Funding Programs Directorate (TEYDEB) of the Scientific and Technological Research Council of Turkey (TUBITAK) (Project No: 3060543, Date: 01.09.2006).
This work was previously published in Handbook of Research on Complex Dynamic Process Management: Techniques for Adaptability in Turbulent Environments, edited by Minhong Wang and Zhaohao Sun, pp. 430-455, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 6.2
Measuring the Impact of an ERP Project at SMEs: A Framework and Empirical Investigation

Maria Argyropoulou, Brunel University, UK
George Ioannou, Athens University of Economics and Business, Greece
Dimitrios N. Koufopoulos, Brunel University, UK
Jaideep Motwani, Grand Valley State University, USA
Abstract
This article analyses and tests a novel framework for the evaluation of an ERP project. The framework incorporates specific performance measures, which are linked to a previously developed model (the "six-imperatives" framework) and are relevant to ERP implementation. Two case studies illustrate the use of the framework in two Greek companies aiming to measure, in practical terms, the impact of the ERP project on their operations. The main results indicate that the "six-imperatives" provide a comprehensive methodology based on the profound exploration and understanding of specific business processes and objectives that should be met in order to assess an ERP project.
Introduction
An Enterprise Resource Planning (ERP) system is an integrated enterprise information system that automates the flow of material, information, and financial resources among all functions within an enterprise on a common database. ERP systems are meant to replace the old systems, usually referred to as "legacy systems", in order to help organizations integrate their information flow and business processes (Abdinnour-Helm et al., 2003). ERP provides two major benefits that do not exist in non-integrated departmental systems: (1) a unified enterprise view of the business that encompasses all functions and departments; and (2) an enterprise database where all business transactions are entered, recorded, processed, monitored, and reported. This unified view increases the requirement for, and the extent of, interdepartmental cooperation and coordination. Moreover, it enables companies to achieve their objectives of increased communication and responsiveness to all stakeholders (Dillon, 1999). This is the most important point raised in connection with ERP systems and the integration of information flows and business processes, as they can support information sharing along the company value chain and help achieve operating efficiency (Law & Ngai, 2007).

After over a decade of applications, the implementation of ERP systems is still considered a complex project, with many problems concerning budgets and expected benefits. Given the possibility of success or failure, it is reasonable to expect that organisations should be able to assess the implications of ERP adoption for their overall performance. Until recently, most ERP researchers and practitioners have generally talked about ERP critical success factors (CSFs) and implementation models based on CSFs that address key implementation issues (Holland and Light, 1999; Motwani et al., 2005; Umble et al., 2003). Top management involvement, business plans, vision, vendor support, change readiness, teamwork, team composition, and communication were found to be critical factors for ensuring a smooth introduction and successful ERP implementation (Ramayah et al., 2007). It seems that very few multi-disciplinary studies have been conducted in an attempt to determine the impact of ERP systems on organisational performance. Moreover,
the topic of assessing the benefits of ERP systems has not been fully addressed, mainly because the justification process is a major concern for organisations investing in IT, and managers are unable to evaluate the holistic implications of adopting new technology, both in terms of benefits and costs (Gunasekaran et al., 2006). Questions such as "how do we define a successful ERP project?" have not yet been answered. Motivated by the ERP benefits concept, in this article we suggest that ERP success is achieved when the organisation is able to perform all its operations better and when the integrated information system can support the performance increase of the company. This comprises the benefits that the company has reaped from the implementation of the ERP, or the achievement of ERP objectives. This is often called benefits management (Willcocks, 1994) and is defined as "the process of organising and managing such that potential benefits arising from IT are actually realised". According to Coleman and Jamieson (1994), benefits management encourages managers to focus on exactly how they will make the system pay off and contribute to the business objectives. Based on our recently developed "six-imperatives" methodology for ERP system evaluation (Argyropoulou et al., 2008a; 2008b), we measured the impact of the ERP implementation project on two Greek SMEs.
ERP System Implementation and SMEs
Existing literature suggests that SMEs may be differentiated from larger enterprises by a number of key characteristics, such as personalised management, severe resource limitations, and flat and flexible structures (Berry, 1998; Burns & Dewhurst, 1996; Huin, 2004; Marri et al., 1998). Another major characteristic of SMEs is the absence of proper and formal IS practices and skills. In the present era of globalisation, it is obvious that the survival of SMEs will be determined by their ability to understand and acknowledge the importance of IS and by their access to the right information at the right time (Sharma & Bhagwat, 2006).

Thus far, ERP adoption has mainly been attempted by larger organizations, so consulting and implementation methodologies are specified for their operations. Following a study of 150 Italian SMEs, Morabito et al. (2005) concluded that the risks associated with investments in ERP software are many and that resellers should be able to offer end-to-end business management solutions addressing specific requirements before and after implementation of the solution. According to Sun et al. (2005), few SMEs have the resources to adequately address every critical success factor as they should, and they are forced to make implementation compromises according to resource constraints, subsequently putting the success of their ERP project at risk. Thus, improving the use of ERP systems and overall business performance necessitates a suitable ERP project management strategy. With limited financial resources and, in most cases, insufficient managerial and technical skills, SMEs need a simple and comprehensive methodology to evaluate the ERP implementation throughout the whole ERP life cycle. Although no company will accept or justify an unsuccessful investment, it is obvious that large companies can absorb the damage better than SMEs, as they have more resources available. It is therefore imperative for smaller companies to embrace ERP projects with great care (Argyropoulou et al., 2007; 2008a).

The organisation of the article is as follows: Section 2 reviews the related literature on performance measures. Section 3 discusses the theoretical background of the recommended framework, as well as the methodological deployment, describing the proposed performance metrics. Then the findings are presented, and the article finishes with a discussion, limitations, and future research directions.
Literature Review
This work draws on three areas: a) supply chain performance measures, b) IS performance measures, and c) ERP performance measures. The next paragraphs briefly discuss these areas as they apply to our analysis.
Supply Chain Performance Measures
The literature review in the area of supply chain performance measures revealed three important frameworks: a) the Supply Chain Operations Reference (SCOR) model, developed by the Supply Chain Council (SCC) in 1996, b) the Oliver Wight ABCD checklist, and c) the Balanced Scorecard for SCM evaluation (Bhagwat & Sharma, 2007). The SCOR model provides a common process-oriented language for communicating among supply-chain partners in the following decision areas: plan, source, make, and deliver. It contains 12 metrics which fall into four categories: a) delivery reliability metrics, b) flexibility and responsiveness metrics, c) cost metrics, and d) asset metrics. The Oliver Wight ABCD 20-point checklist, on the other hand, is a guide used by manufacturing professionals to improve their company's performance. It addresses the following business areas: strategic planning, people and team systems, product development, continuous improvement, planning, and control. Neither framework covers every area of ERP performance, but both give an indication of the company's overall business performance. In a recent study, Bhagwat and Sharma (2007) propose a balanced approach to SCM performance evaluation. The authors used the four perspectives of the Balanced Scorecard (Kaplan & Norton, 1992) and developed a new framework, structurally similar to the SBSC, with corresponding metrics that reflect SCM strategy and goals.
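To give these categories a concrete shape, a typical SCOR-style delivery reliability measure is the share of orders delivered both on time and in full. The minimal sketch below shows the computation; the field names and sample data are our own illustration and are not taken from the SCOR specification.

    # Share of orders delivered on time AND in full (illustrative).
    def delivery_reliability(orders):
        ok = sum(1 for o in orders
                 if o["on_time"] and o["in_full"])
        return ok / len(orders)

    orders = [
        {"on_time": True,  "in_full": True},
        {"on_time": True,  "in_full": False},
        {"on_time": False, "in_full": True},
        {"on_time": True,  "in_full": True},
    ]
    print(f"{delivery_reliability(orders):.0%}")  # prints 50%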
IS Performance Measures
Many researchers have focused on measuring IS performance (Cha-Jan Chang & King, 2005; DeLone & McLean, 1992; 2003; Jiang & Klein, 1999; Mirani & Lederer, 1998; Pitt et al., 1995; Saarinen, 1996; Sharma & Bhagwat, 2006; Torkzadeh & Doll, 1999). All these studies have contributed substantially, in that they advocate the importance of performance measurement for the improvement of business activities and identify a number of metrics for the IS function. However, most of them have developed and tested general survey instruments, which measure IS performance without focusing on a specific system being evaluated and its purpose.
ERP Systems Performance Measures
The review of recent literature covering the particular area of ERP systems' impact on overall performance allowed us to identify the following significant papers: a) Wieder et al. (2006), who conducted a field study to find the impacts of several aspects of ERP adoption using IT measures, business process performance measures, and firm performance measures; and b) Chand et al. (2005), who carried out case study research in which a balanced-scorecard-based framework for valuing the strategic contributions of an ERP system was applied. Motivated by these two studies, but mainly by the Information Systems Functional Scorecard (Cha-Jan Chang & King, 2005), we recently introduced a new framework, called the "six-imperatives" framework, which provides a solid methodology for the identification and incorporation of the necessary metrics for ERP system post-implementation review (Argyropoulou et al., 2008b). The purpose of this article is to apply the "six-imperatives" framework to two Greek SMEs that were aiming at the assessment of their ERP implementation project.

The "Six-Imperatives" Framework for Measuring the Impact of an ERP Project
The theoretical background of the framework is based mainly on two previously developed methodologies: the Information Systems Functional Scorecard developed by Cha-Jan Chang and King (2005) and the "six imperatives" framework for ERP system selection and evaluation developed by Argyropoulou et al. (2008a; 2008b). Both frameworks are briefly described in the remainder of this section, so that readers can comprehend the rationale of the analysis that follows (detailed theoretical information is given in Argyropoulou et al., 2008a; 2008b).
The Information Systems Functional Scorecard (ISFS)
The Cha-Jan Chang and King (2005) scorecard is based on the models suggested by Pitt et al. (1995) and DeLone and McLean (2003) and has been designed to measure the performance of the entire IS function. The authors developed an instrument consisting of three major dimensions: systems performance, information effectiveness, and service performance. These dimensions constituted the basic constructs for their field study research and are briefly discussed below, while the ISFS sub-constructs are presented in Table 1 (for a detailed discussion, see Cha-Jan Chang and King, 2005).

Systems performance: Measures of systems performance assess the quality aspects of the system, such as reliability, response time, ease of use, and so on, and the various impacts that the systems have on the user's work.

Information effectiveness: Measures of information effectiveness assess the quality of information in terms of the design, operation, use, and value provided by information, as well as the effects of the information on the user's job.

Service performance: Measures of service performance assess each user's experience with the services provided by the IS function in terms of quality and flexibility.
Table 1. Sub-ISFS constructs, adapted from Cha-Jan Chang and King (2005)

Systems performance               | Information effectiveness            | Service performance
Impact on job                     | Intrinsic quality of information     | Responsiveness
Impact on external constituencies | Contextual quality of information    | Reliability
Impact on internal processes      | Presentation quality of information  | Service provider quality
Effect on knowledge and learning  | Accessibility of information         | Empathy
Systems features                  | Reliability of information           | Training
Ease of use                       | Flexibility of information           | Flexibility of services
                                  | Usefulness of information            | Cost/benefit of services
The Six Imperatives Framework for ERP System Implementation in SMEs
According to Figure 1, the evaluation of the ERP system should commence well before implementation, during the selection process, where the necessary ERP objectives are identified and determined (Argyropoulou et al., 2008a). Any SME that wishes to implement an appropriate ERP system should first conduct an in-depth analysis of its strategy and needs based on six imperatives, which represent the pillars on which the investigation is performed and which are critical for implementation and operational success. These are analysed briefly below, to draw attention to the main activities, issues, dynamics, and complexities involved in the ERP project implementation cycle (detailed analysis in Argyropoulou et al., 2007; 2008a).

The strategy analysis imperative implies that companies should look for ERP implementation options and objectives concerning the package's alignment with the corporate strategy and competitive priorities, such as expansion or alliances.

The investment concerns imperative emphasizes the importance of investment justification in an ERP project; the two associated objectives are a) adherence to the time schedule, and b) contingency budgetary planning, which can reduce the issues relating to increased financial expenses that may label the project a "black hole" or failure.

The process assessment imperative suggests that ease of customisation and systems functionality should constitute the basic ERP objectives, because a better fit between the packaged software functionality and user organisation requirements leads to successful implementation and usage.

The user needs identification imperative recommends that the future system should be designed to support business processes based on the needs and capabilities of the current and future users. The ERP objectives stemming from this imperative are the development and communication of an easy-to-use system that fulfils a range of specific user needs and expectations, covered under the term "usefulness" of the system.

The technology requirements imperative considers the technical characteristics as well as the system's integration with other information technologies within the supply chain. Moreover, it implies that the ERP must produce the required information when needed, and that control over both the information and the information systems is maintained.

The vendor features imperative commands effective supplier relationship management (SRM) through specific supplier selection practices.

Figure 1. The "six imperatives" framework for the assessment of an ERP project (adapted from Argyropoulou et al., 2008b)

An organizational analysis based on these imperatives leads to a number of ERP objectives to be achieved, which integrate all the expected benefits pursued in order to derive value from the new system (Figure 1). The specific objectives, determined pre-project, can serve as a guidance tool throughout the whole ERP life cycle and especially for the post-implementation review (Argyropoulou et al., 2008b; Nicolaou, 2004). Combining the sub-ISFS constructs (see Table 1) and the "six imperatives" framework, Argyropoulou et al. (2008b) provided a set of performance metrics which can actually measure the extent to which the pre-determined ERP objectives have been achieved (see the Appendix). The strategy analysis and investment concerns measures were drawn from the recent literature (i.e., Hunton et al., 2003; Irani et al., 1997; Irani, 2002; Poston & Grabski, 2001; and the SCOR model as used by Wieder et al., 2006). The remaining metrics were based on the ISFS sub-constructs. At this point it should be mentioned that any organisation might choose among a number of metrics available in the literature. The important issue is their direct relationship to the ERP implementation objectives and the degree to which the latter have been achieved.
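Most of these metrics reduce to simple before/after comparisons. As an illustration, the sketch below computes the kind of percentage reductions reported in Table 2; the missed-order percentages (4% to 2%) and the order fulfilment times (17 to 10 minutes) come from the Company A case discussed in the findings, while the COGS/revenues ratios are invented for the example.

    # Before/after computation behind the percentage-reduction metrics.
    def pct_reduction(before, after):
        """Percentage reduction of a metric after ERP implementation."""
        return (before - after) / before * 100

    metrics = {
        "COGS/revenues ratio":        (0.62, 0.53),  # invented figures
        "missed orders/total orders": (0.04, 0.02),  # 4% -> 2% (Company A)
        "order fulfilment time":      (17, 10),      # minutes (Company A)
    }

    for name, (before, after) in metrics.items():
        print(f"{name}: {pct_reduction(before, after):.0f}% reduction")

Run on these inputs, the sketch prints reductions of 15%, 50%, and 41%, the same order of magnitude as the Company A figures in Table 2.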
Testing the "Six Imperatives" Framework
This section describes a practical example of the use of the "six-imperatives" framework in two Greek SMEs. An "action research" strategy was followed, as the whole project focused on action, promoting change in the company (Cunningham, 1995; Marsick & Watkins, 1997), and involved academics with a genuine concern for the expected findings (Eden & Huxham, 1996: 75).
Data Collection
We contacted two pharmaceutical companies. Company A, with 100 employees, had successfully applied the "six-imperatives" analysis prior to ERP selection and adoption; they implemented a customised Oracle ERP system. Company B, with 200 employees, had implemented SAP without any prior analysis or reengineering of their processes. The measures from our framework were provided to the IT managers one week prior to the scheduled interview, allowing them to look for answers not readily available. Finally, some modifications were incorporated to include particularities of the pharmaceutical sector. The performance metrics that were selected for application in the two case organisations are discussed below and shown in Table 2.
Findings
The results from both cases are presented in Table 2, and a brief explanation is given in the remainder of this section.

Comments on financial metrics: The reduction of COGS/revenues in both cases was due to a reduction in production costs. For Company A, labour hours and raw material costs were significantly reduced thanks to a more effective production schedule that followed the ERP (MRP in particular) implementation. Company B reported a much lower reduction of the COGS/revenues ratio, which, according to their managers, was within their strategic objectives. The inventory holding cost remained the same in both companies, as pharmaceutical companies are obliged by legislation to hold two months' stock.

Comments on marketing metrics: Company A reported that the percentage of missed orders/total orders fell from 4% before ERP to 2% after ERP adoption because of the implementation of the specific marketing/sales module. Company B explained that this figure had always been low due to specific CRM software they had been using, and they did not really witness any significant improvement following SAP adoption. The reduction of delivery time is not applicable in pharmaceuticals, as they are obliged to keep stock. Extraordinary demand outbursts, such as a flu epidemic, might affect the delivery time, but these are not incorporated in the production schedule; in such cases both companies resorted to imports from abroad.

Comments on investment metrics: Company A explained that they replaced their legacy systems almost immediately, following a two-month period of training. The preparatory needs analysis based on the "six-imperatives" guidance had already guaranteed the smooth adoption of suitable software. Moreover, the whole project finished within budget. Indirect human costs exceeded budgets by 10% due to additional training and motivation schemes; indirect organisational cost overruns were not reported. Company B reported a three-year implementation project with parallel use of the previous systems and 25% overruns, mainly due to unplanned configuration. Indirect human costs exceeded budgets by 15% for additional technical support and expert advice. Indirect organisational costs were incurred and exceeded budgets by 15% due to the organisational restructuring that followed ERP adoption.
Table 2. Actual performance measures

Imperative                                        | Measure                                     | Company A     | Company B
Strategy analysis: Financial                      | % reduction of COGS/revenues                | 15%           | 4%
Strategy analysis: Marketing                      | % reduction of missed orders/total orders   | 50%           | 15%
                                                  | % reduction of delivery time                | N/A           | N/A
Investment concerns: Time-based                   | Processing time along critical path         | 6 months      | 3 years
Investment concerns: Cost-based                   | Direct project costs                        | Within budget | 25% overruns
                                                  | Indirect human costs                        | 10%           | 15%
                                                  | Indirect organisational costs               | -             | 15%
Process assessment: Impact on internal processes  | Reduction in order fulfilment time          | 41%           | N/A
                                                  | Reduction of order cost                     | 22%           | 5%
User needs: Impact on job                         | % of errors per function/department         | 0.1%          | 0
User needs: Ease of use                           | Number of training hours per user           | 90 days       | 120 days
                                                  | Number of consultant days per user          | in-house      | 60 hours
User needs: Impact on knowledge/learning          | Productivity                                | 40% increase  | 5% reduction
Technology requirements: Systems reliability      | Number of restoration hours                 | N/A           | N/A
                                                  | Number of maintenance hours                 | N/A           | N/A
Information effectiveness: Accessibility          | Number of hours that information was not received in a timely manner | never | rarely
Information effectiveness: Accuracy               | Number of errors in reports                 | Limited       | Limited
Information effectiveness: Flexibility            | Number of hours for parameterization-customization | 4 man-months | 20 man-months
Vendor features: Service responsiveness           | Response time and maintenance fees          | N/A           | N/A

Note: COGS = cost of goods sold
Comments on process metrics: Both companies explained that manufacturing cost is considered confidential information and that the whole production process is subject to strict legislation. However, Company A explained that the time for order fulfilment was reduced from 17 minutes to 10 minutes, whereas the order cost was reduced to 470,000 euros from 600,000 euros before ERP implementation (attributed mainly to a reduction in raw materials, warehousing, and distribution costs). Company B reported that they had always performed efficiently due to the warehouse software and the automated picking and packing they had been using prior to SAP; warehousing and distribution had always been under control. The further 5% reduction in order cost was attributed to better inventory management.

Comments on users' metrics: Company A's manager reported that labour hours were reduced from an average of 2,833 hours/employee to 1,850 hours/employee (the legal working time is about 1,800 hours/employee on a yearly basis). The ensuing productivity index was calculated as labour hours/total orders and decreased from 2,833/10,000 = 0.28 to 1,850/10,000 = 0.185, resulting in a 40% increase in productivity. Company B reported a productivity reduction of 5%; however, they argued that this reflected a sudden increase in administrative costs, as they hired technically skilled people to support the new system. Errors in both companies were measured as a percentage of returns. In Company A, errors were reduced to 0.1% from 0.4% before ERP adoption. Company B had always had zero defects due to strict statistical process quality control schemes.

Comments on technology metrics: Systems reliability measures were not available because both companies were bound by the Service Level Agreement (SLA) they had signed with their IT subcontractors. Information effectiveness measures for both companies were satisfactory. The parameterization-customization period in Company A lasted 4 man-months; this period lasted 20 man-months in Company B, which was expected given the installation of very sophisticated software and the lack of any prior reengineering.

Comments on vendor metrics: These measures were not available. Nonetheless, both managers were prompted to explore the issue for internal use.
Discussion and Practical Implications
The purpose of this study was to test the framework and to explore whether it could become a useful tool for the actual measurement of the impact of ERP on company performance. Both companies confirmed that the methodology was comprehensive, albeit simple and straightforward. Taking a stance, we could argue that project A was definitely successful, because the company had reaped the benefits it was actually looking for, focusing on how to make the system pay off and contribute to the ERP business objectives.

We asked both managers to weigh up their ERP implementations. Company A's managers were confident about the success of the ERP venture because they had met, to a great extent, the ERP objectives that had been determined pre-project, such as optimisation of business processes, standardisation of operations flow, establishment of customer-centric policies, and better organisational performance. Finally, they commented that the real contribution of the "six-imperatives" framework lies in the learning process and the profound exploration and understanding of specific business processes and goals, which had been translated into tailor-made ERP objectives that were, in turn, largely met. Company B's managers explained that they could consider theirs a successful project, since it was finally up and running and the overall results were satisfying. After all, they had always been doing well, and the SAP adoption was meant to replace their legacy systems.
Limitations
The acceptability of case study research suffers from a limited ability to generalise the findings. However, based on Yin's (1994) and Walsham's (1995) arguments that case study findings can be used to develop the concepts identified in the literature, we believe that our framework needs only to be tested in a wider range of situations to fine-tune the details and draw implications from the data. Nonetheless, our "before and after" analysis is straightforward and can compare, and eventually control for, the factors that are related to ERP implementation and affect its use and performance. All suggested metrics are simple to measure and can be continuously monitored. In this way, the framework can easily be used in SMEs, offering valuable insight into ERP performance by comparing specific areas of operation before and after implementation. In addition, the "before and after" analysis can be very useful for internal benchmarking.
Conclusion and Future Directions
The research reports on a comparative case study of two Greek SMEs that implemented an ERP system. Using established approaches in the IS performance measurement literature, in this article we tested the "six-imperatives" framework for the evaluation of an ERP project. The framework consists of six imperatives, each of which contains a set of metrics expressed in time or monetary units and/or percentages. The holistic approach of the framework is simple and can be used by managers. The suggested metrics can be objective and can provide SMEs with valuable information in a fast and accurate way, ensuring comprehensiveness in their decision making, which in most cases can strengthen their competitive position. The six imperatives framework for ERP performance measurement contributes to the ERP evaluation literature because it integrates the necessary factors to be considered in this process. In conclusion, this framework illustrates the critical factors and issues that need to be addressed in all three phases of the implementation process: the pre-implementation or setting-up phase, implementation, and the post-implementation or evaluation phase. Future research might compare the "six-imperatives" framework with the balanced scorecard and/or other performance measurement frameworks. Moreover, it can be further expanded and used as an instrument for field study research.

References
Abdinnour-Helm, S., Lengnick-Hall, M.L., & Lengnick-Hall, C.A. (2003). Pre-implementation attitudes and organizational readiness for implementing an Enterprise Resource Planning system. European Journal of Operational Research, 146(2), 258-273.
Argyropoulou, M., Ioannou, G., & Prastacos, G.P. (2007). ERP implementation at SMEs: An initial study of the Greek market. International Journal of Integrated Supply Management, 3(4), 406-425.
Argyropoulou, M., Ioannou, G., Soderquist, K.E., & Motwani, J. (2008a). Managing ERP system evaluation and selection in SMEs using the "six-imperatives" methodology. IJPM, 1(4), 430-452.
Argyropoulou, M., Ioannou, G., Koufopoulos, D., & Motwani, J. (2008b). Performance drivers of ERP systems in small and medium-sized enterprises. International Journal of Enterprise Network Management, 2(3), 333-349.
Berry, M. (1998). Strategic planning in small and high tech companies. Long Range Planning, 32(3), 455-466.
Bhagwat, R., & Sharma, M.K. (2007). Performance measurement of supply chain management: A balanced scorecard approach. Computers and Industrial Engineering, 53, 43-62.
Burns, P., & Dewhurst, J. (1996). Small Business and Entrepreneurship (2nd ed.). London: Macmillan Press.
Cha-Jan Chang, J., & King, W.R. (2005). Measuring the performance of information systems: A functional scorecard. Journal of Management Information Systems, 22(1), 85-115.
Chand, D., Hachey, G., Hunton, J., Owhoso, V., & Vasudevan, S. (2005). A balanced scorecard based framework for assessing the strategic impacts of ERP systems. Computers in Industry, 56, 558-572.
Coleman, T., & Jamieson, M. (1994). Beyond return on investment. In L. Willcocks (Ed.), Information Management: The Evaluation of Information Systems Investments (pp. 189-205). London: Chapman & Hall.
Cunningham, J.B. (1995). Strategic considerations in using action research for improving personnel practices. Public Personnel Management, 24(2), 515-529.
DeLone, W.H., & McLean, E.R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95.
DeLone, W.H., & McLean, E.R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9-30.
Eden, C., & Huxham, C. (1996). Action research for management research. British Journal of Management, 7(1), 75-86.
Gunasekaran, A., Ngai, E.W.T., & McGaughey, R.E. (2006). Information technology and systems justification: A review for research and applications. European Journal of Operational Research, 173(3), 957-983.
Heizer, J., & Render, B. (2003). Operations Management, International Edition (7th ed.). Upper Saddle River, NJ: Pearson Education.
Holland, C.P., & Light, B. (1999). A critical success factors model for ERP implementation. IEEE Software, 16(3), 30-35.
Huin, S.F. (2004). Managing deployment of ERP systems in SMEs using multi-agents. International Journal of Project Management, 22(6), 511-517.
Hunton, J.E., & Bieler, J.D. (1997). Effects of user participation in systems development: A longitudinal field experiment. MIS Quarterly, 21(4), 359-388.
Hunton, J.E., Lippincott, B., & Reck, J.L. (2003). Enterprise resource planning systems: Comparing firm performance of adopters and nonadopters. International Journal of Accounting Information Systems, 4(3), 165-184.
Irani, Z., Ezingeard, J.N., & Grieve, R.J. (1997). Integrating costs of manufacturing IT/IS infrastructure into the investment decision-making process. Technovation, 17(11/12), 695-706.
Irani, Z. (2002). Information systems evaluation: Navigating through the problem domain. Information and Management, 40(1), 199-211.
Jiang, J.J., & Klein, G. (1999). User evaluation of information systems: By system typology. IEEE Transactions on Systems, Man and Cybernetics, 29(1), 111-116.
Kaplan, R.S., & Norton, D.P. (1992). The Balanced Scorecard: Measures that drive performance. Harvard Business Review, 70(1), 71-80.
Lin, C., & Pervan, G. (2003). The practice of IS/IT benefits management in large Australian organisations. Information and Management, 41(1), 13-24.
Marri, H., Gunasekaran, A., & Grieve, R. (1998). An investigation into the implementation of computer integrated manufacturing in small and medium sized enterprises. International Journal of Advanced Manufacturing Technology, 14, 935-942.
Marsick, V.J., & Watkins, K.E. (1997). Case study research methods. In R.A. Swanson & E.F. Holton (Eds.), Human Resource Development Research Handbook (pp. 138-157). San Francisco, CA: Berrett-Koehler.
Mirani, R., & Lederer, A.L. (1998). An instrument for assessing the organisational benefits of IS projects. Decision Sciences, 29(4), 803-838.
Morabito, V., Pace, S., & Previtali, P. (2005). ERP marketing and Italian SMEs. European Management Journal, 23(5), 590-598.
Motwani, J., Subramanian, R., & Gopalakrishna, P. (2005). Critical factors for successful ERP implementation: Exploratory findings from four case studies. Computers in Industry, 56, 529-544.
Nicolaou, A. (2004). Quality of post-implementation review for enterprise resource planning systems. International Journal of Accounting Information Systems, 5(1), 25-49.
Pitt, L.F., Watson, R.T., & Kavan, C.B. (1995). Service quality: A measure of information systems effectiveness. MIS Quarterly, 19(2), 173-185.
Poston, R., & Grabski, S. (2001). Financial impact of enterprise resource planning implementations. International Journal of Accounting Information Systems, 2(4), 271-294.
Ramayah, T., Roy, M.H., Arokiasamy, S., Zbib, I., & Ahmed, Z.U. (2007). Critical success factors for successful implementation of enterprise resource planning systems in manufacturing organisations. International Journal of Business Information Systems, 2(3), 276-297.
Saarinen, T. (1996). An expanded instrument for evaluating information systems success. Information and Management, 31(2), 103-118.
Sharma, M.K., & Bhagwat, R. (2006). Performance measurements in the implementation of information systems in small and medium-sized enterprises: A framework and empirical analysis. Measuring Business Excellence, 10(4), 8-21.
Sun, A.Y.T., Yazdani, A., & Overend, J.D. (2005). Achievement assessment for enterprise resource planning (ERP) system implementations based on critical success factors (CSFs). International Journal of Production Economics, 98(2), 189-203.
Torkzadeh, G., & Doll, W.J. (1999). The development of a tool for measuring the perceived impact of information technology on work. Omega: The International Journal of Management Science, 27(3), 327-339.
Umble, E., Haft, R., & Umble, M. (2003). Enterprise resource planning: Implementation procedures and critical success factors. European Journal of Operational Research, 146(2), 241-257.
Walsham, G. (1995). Interpretive case studies in IS research: Nature and method. European Journal of Information Systems, 4, 74-81.
Wieder, B., Booth, P., Matolcsy, Z.P., & Ossimitz, M.L. (2006). The impact of ERP systems on firm and business process performance. Journal of Enterprise Information Management, 19(1), 13-29.
Willcocks, L. (1994). Introduction of capital importance. In L. Willcocks (Ed.), Information Management: The Evaluation of Information Systems Investments (pp. 1-27). London: Chapman & Hall.
Yen, R., & Sheu, C. (2004). Aligning ERP implementation with competitive priorities of manufacturing firms: An exploratory study. International Journal of Production Economics, 92(3), 207-220.
Yin, R.K. (1994). Case Study Research: Design and Methods (2nd ed.). Thousand Oaks, CA: Sage.
Appendix
Methodological Deployment of the Six Imperatives Framework

Imperative                                                              | Measure                                      | Relevant literature
Strategy analysis: Financial (DeLone & McLean, 2003)                   | % reduction of COGS/revenues                 | Net benefits proposed by DeLone and McLean (2003)
                                                                        | % reduction of inventory holding cost        | SCOR model
                                                                        | % reduction in logistics costs               | SCOR model
Strategy analysis: Marketing (Mirani & Lederer, 1998)                  | % reduction of defects                       | Total Quality Management theory
                                                                        | % reduction of lead time                     | SCOR model
Investment concerns: Time-based (Heizer & Render, 2003)                | Processing time along critical path          | Project management theory
Investment concerns: Cost-based (Heizer & Render, 2003)                | Direct project costs                         | Irani et al. (1997); Irani (2002)
                                                                        | Indirect human costs                         | Irani et al. (1997); Irani (2002)
                                                                        | Indirect organisational costs                | Irani et al. (1997); Irani (2002)
Process assessment: Impact on internal processes (Cha-Jan Chang & King, 2005) | Reduction in process time             | Cha-Jan Chang & King (2005)
                                                                        | Reduction of process cost                    | Cha-Jan Chang & King (2005)
User needs: Impact on job (Cha-Jan Chang & King, 2005)                 | Number of errors per function/department     | Cha-Jan Chang & King (2005)
User needs: Ease of use (Cha-Jan Chang & King, 2005)                   | Number of training hours per user            | Cha-Jan Chang & King (2005)
                                                                        | Number of consultant days per user           | Cha-Jan Chang & King (2005)
User needs: Impact on knowledge/learning (Cha-Jan Chang & King, 2005)  | Rate of sick leave or staff turnover         | New
                                                                        | % increase in employees' productivity        | SCOR model
Technology requirements: Systems reliability (DeLone & McLean, 2003; Cha-Jan Chang & King, 2005) | Number of restoration hours | New
                                                                        | Number of maintenance hours                  | New
Information effectiveness: Accessibility (Cha-Jan Chang & King, 2005)  | Number of hours that information was not received in a timely manner | Cha-Jan Chang & King (2005)
Information effectiveness: Accuracy (Cha-Jan Chang & King, 2005)       | Number of errors in reports                  | Cha-Jan Chang & King (2005)
Information effectiveness: Flexibility (Cha-Jan Chang & King, 2005)    | Number of hours for customization and enhancements | Cha-Jan Chang & King (2005)
Vendor features: Service responsiveness (Cha-Jan Chang & King, 2005)   | Response time and maintenance fees           | Cha-Jan Chang & King (2005)
Vendor features: Service flexibility (Cha-Jan Chang & King, 2005)      | Response time and maintenance fees           | Cha-Jan Chang & King (2005)
This work was previously published in Enterprise Information Systems for Business Integration in SMEs: Technological, Organizational, and Social Dimensions, edited by Maria Manuela Cruz-Cunha, pp. 1-14, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 6.3
Managing Temporal Data

Abdullah Uz Tansel
Baruch College, CUNY, USA
INTRODUCTION

In general, databases store current data. However, the capability to maintain temporal data is a crucial requirement for many organizations and provides the base for organizational intelligence. A temporal database maintains time-varying data, that is, past, present, and future data. In this chapter, we focus on the relational data model and address the subtle issues in modeling and designing temporal databases. A common approach to handling temporal data within traditional relational databases is the addition of time columns to a relation. Though this appears to be a simple and intuitive solution, it does not address many subtle issues peculiar to temporal data, such as comparing database states at two different time points, capturing the periods of concurrent events and accessing times beyond these periods, handling multi-valued attributes, and coalescing and restructuring temporal data [Gadia 1988, Tansel and Tin 1997]. There is a growing interest in temporal databases. The first book dedicated to temporal databases [Tansel et al. 1993] was followed by others addressing issues in handling time-varying data [Betini, Jajodia and Wang 1988, Date, Darwen and Lorentzos 2002, Snodgrass 1999].
TIME IN DATABASES

The set T denotes time values; it is totally ordered under the '≤' relationship. Because of its simplicity, we will use the natural numbers to represent time, {0, 1, ..., now}. The symbol 0 is the relative origin of time and now is a special symbol that represents the current time. The value of now advances according to the time granularity used. There are different time granularities, such as seconds, minutes, hours, days, months, years, etc. (for a formal definition see [Betini, Jajodia and Wang 1988]). A subset of T is called a temporal set. A temporal set that contains consecutive time points {t1, t2, ..., tn} is represented either as a closed interval [t1, tn] or as a half-open interval [t1, tn+1). A temporal element [Gadia 1988] is a temporal set that is represented by the disjoint maximal intervals corresponding to its subsets of consecutive time points. Temporal sets, intervals, and temporal elements can be used as time stamps for modeling temporal data and are essential constructs in temporal query languages. Temporal sets and temporal elements are closed under set-theoretic operations, whereas intervals are not. However, intervals are easier to implement. Time intervals, and hence temporal elements and temporal sets, can be compared. The possible predicates are before, after, meets, during, etc. [Allen 1983]. An interval or a temporal set (element) that includes now expands in duration as time advances. Other symbols, such as forever or until changed, have also been proposed as alternatives to the symbol now for easier handling of future data.

There are various aspects of time in databases [Snodgrass 1987]. Valid time indicates when a data value becomes effective. It is also known as logical or intrinsic time. On the other hand, transaction time (or physical time) indicates when a value is recorded in the database. User-defined time is application specific and is an attribute whose domain is time. Temporal databases are in general append-only; that is, new data values are added to the database instead of replacing the old values. A database that supports valid time
keeps historical values and is called a valid time (historical) database. A rollback database supports transaction time and can roll the database back to any past time. Valid time and transaction time are orthogonal. Furthermore, a bitemporal database, which supports both valid time and transaction time, is capable of handling retroactive and post-active changes to temporal data. In the literature, the term temporal database is used generically to mean a database with some kind of time support. In this chapter we focus our discussion on the valid time aspect of temporal data in relational databases. However, the discussion can easily be extended to databases that support transaction time, or both.
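As an illustration of such interval comparisons (a sketch of our own, not drawn from the chapter), Allen-style predicates over half-open periods [s1, e1) and [s2, e2) can be written directly as SQL conditions; the table and column names here are hypothetical:

    -- Classify the relationship between two half-open periods.
    SELECT x.id AS x_id, y.id AS y_id,
           CASE
             WHEN x.end_t   <  y.start_t THEN 'before'
             WHEN x.end_t   =  y.start_t THEN 'meets'
             WHEN x.start_t >  y.start_t AND x.end_t < y.end_t THEN 'during'
             WHEN x.start_t <  y.end_t   AND y.start_t < x.end_t THEN 'overlaps'
             ELSE 'after or other'
           END AS time_relation
    FROM periods x, periods y
    WHERE x.id <> y.id;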
REPRESENTING TEMPORAL DATA

A temporal atom is a time-stamped value and represents a temporal value. It asserts that the value v is valid over the period of its time stamp t, which can be a time point, an interval, a temporal set, or a temporal element. Time points are only suitable for values that are valid at a single point, not over a period. Time can be added to tuples or to attributes and, hence, temporal atoms can be incorporated into the relational data model in different ways. To represent temporal atoms in tuple time stamping, a relation is augmented either with two attributes that represent the end points of an interval or with a time column whose domain is intervals, temporal sets, or temporal elements (temporally ungrouped). Figure 1 depicts the salary (SAL) history of an employee, E1, where intervals or temporal elements are used as time stamps with a time granularity of month/year.

Figure 1. Salary in tuple time stamping

The salary is 20K from 1/01 to 5/02 and from 8/02 to 6/03. The discontinuity arises because the employee quit at 6/02 and returned at 8/02. The salary has been 30K since 6/03. Figure 2 gives the same salary history in attribute time stamping (temporally grouped).

Figure 2. Salary in attribute time stamping

An attribute value is a set of temporal atoms, and the relation has only one tuple that carries the entire history. It is also possible to create a separate tuple for each time-stamped value (temporal atom) in the history, i.e., three tuples for Figure 2.b (two tuples for Figure 2.c). Temporally grouped data models are more expressive than temporally ungrouped data models, but their data structures are also more complex [Clifford, Croker, and Tuzhilin 1993]. One noteworthy aspect of the data presented in Figure 2 is that the timestamps are glued to attribute values; in other words, attribute values are temporal atoms. In forming new relations as a result of query expressions, these timestamps stay with the attribute values. On the other hand, in tuple time stamping a timestamp may be implicit (glued) or explicit (unglued) to tuples. This is a design choice, and the relations in Figure 1 can be interpreted as having implicit or explicit time stamps. An implicit time stamp is not available to the user as a column of a relation, though the user can refer to it. On the other hand, an explicit time stamp is like any other attribute of a relation and is defined on a time domain. Implicit time stamps restrict the time of a new tuple created from two constituent tuples, since each tuple may not keep its own timestamp and a new timestamp needs to be assigned to the resulting tuple. Explicit timestamps allow multiple timestamps in a
tuple. In this case, two tuples may be combined to form a new tuple, each carrying its own time reference. However, the user needs to keep track of these separate time references.
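As a minimal sketch (ours, not part of the original chapter) of tuple time stamping with explicit period end points, the salary history of Figure 1 might be stored as follows; the names and the far-future bound standing in for now are assumptions:

    -- Tuple-time-stamped salary relation with half-open periods [Start_T, End_T).
    CREATE TABLE EMP_S (
      E#      NUMBER,
      SAL     NUMBER,
      Start_T DATE,
      End_T   DATE);

    INSERT INTO EMP_S VALUES (1, 20, DATE '2001-01-01', DATE '2002-06-01');
    INSERT INTO EMP_S VALUES (1, 20, DATE '2002-08-01', DATE '2003-06-01');
    -- 'now' is approximated by a distinguished far-future bound.
    INSERT INTO EMP_S VALUES (1, 30, DATE '2003-06-01', DATE '9999-12-31');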
TEMPORAL RELATIONS

In Figure 3, we show some sample employee data for the EMP relation over the scheme E# (Employee number), ENAME (Employee name), DNAME (Department name), and SALARY. E# and ENAME are (possibly) constant attributes, whereas DNAME and SALARY change over time. In the EMP relation, temporal elements are used in temporal atoms for representing temporal data. As can be seen, there are no department values for Tom in the periods [2/02, 4/02) and [8/02, now]. Perhaps he was not assigned to a department during these time periods. The time stamp of E# represents the lifespan of an employee that is stored in the database. Note that EMP is a nested [N1NF] relation. It is one of the many possible relational representations of the employee data [Gadia 1988, Clifford and Tansel 1985, Tansel 2004]. Figure 4 gives, in tuple time stamping, three 1NF relations, EMP_N, EMP_D, and EMP_S
Figure 3. The EMP relation in attribute time stamping
Figure 4. The EMP relation in tuple time stamping
for the EMP relation of Figure 3 [Lorentzos and Johnson 1987, Navathe and Ahmed 1987, Sarda 1987, Snodgrass 1988]. In Figure 3, temporal sets (elements) can also be used as the time reference. Similarly, in the relations of Figure 4, intervals or temporal sets (elements) can be used as the time reference in a single time attribute that replaces the Start and End columns. Note that in tuple time stamping a relation may only contain attributes whose values change at the same time; attributes changing at different times require separate relations. Each particular time stamping method imposes restrictions on the type of base relations allowed, as well as on the new relations that can be generated from the base relations. The EMP relation in Figure 3 is a unique representation of the employee data, where each tuple contains the entire history of an employee [Clifford and Tansel 1985, Tansel 1987, Gadia 1988]. E# is a temporal grouping identifier regardless of the timestamp used [Clifford, Croker and Tuzhilin 1993]. In the case of tuple time stamping, an employee's data is dispersed over several tuples; for example, there are three salary tuples for employee 121 in Figure 4.c. These tuples
belong to the same employee since their E# values are equal. For the relations in Figures 3 and 4 there are many other possible representations, which can be obtained by taking subsets of the temporal elements (intervals) and creating several tuples for the same employee. These relations are called weak relations [Gadia 1988]. Though they contain the same data as the original relation in unique representation, query specification over them becomes very complex. Weak relations naturally occur in querying a temporal database. A weak relation can be converted to an equivalent unique relation by coalescing the tuples that belong to the same object (employee) into one single tuple [Sarda 1987, Bohlen, Snodgrass and Soo 1996]. In coalescing, a temporal grouping identifier such as E# is used to determine the related tuples.
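To make coalescing concrete, here is one way to coalesce value-equivalent tuples over the EMP_S relation sketched earlier, using standard SQL window functions; this is our illustration, not a construct from the cited papers. Tuples with the same E# and salary whose half-open periods meet or overlap are packed into one tuple:

    -- Gaps-and-islands coalescing of value-equivalent tuples.
    WITH flagged AS (
      SELECT E#, SAL, Start_T, End_T,
             CASE WHEN Start_T <= MAX(End_T) OVER (
                         PARTITION BY E#, SAL ORDER BY Start_T
                         ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
                  THEN 0 ELSE 1 END AS new_grp   -- 1 starts a new island
      FROM EMP_S
    ),
    grouped AS (
      SELECT E#, SAL, Start_T, End_T,
             SUM(new_grp) OVER (PARTITION BY E#, SAL ORDER BY Start_T) AS grp
      FROM flagged
    )
    SELECT E#, SAL, MIN(Start_T) AS Start_T, MAX(End_T) AS End_T
    FROM grouped
    GROUP BY E#, SAL, grp;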
DESIGNING TEMPORAL RELATIONS

The design of relational databases is based on functional and multivalued dependencies. These dependencies are used to decompose relations to
3NF, BCNF, or 4NF, which have desirable properties. In the temporal context, functional dependencies turn into multivalued dependencies, whereas multivalued dependencies stay multivalued. For instance, in the current employee data depicted in Figure 3, E# → SALARY holds. When temporal data for the employees are considered, this functional dependency turns into a multivalued dependency, i.e., E# →→ (Time, SALARY). Designing nested relations based on functional and multivalued dependencies is explored in [Ozsoyoglu and Yuan 1987]. They organize the attributes into a scheme tree where any root-to-leaf branch is a functional or multivalued dependency. Such a scheme tree represents a N1NF relation without undesirable data redundancy. This methodology is applied to the design of temporal relations in [Tansel and Garnett 1989]. Let E# be a temporal grouping identifier. Figure 5 gives the scheme tree for the EMP relation of Figure 3. The dependencies in EMP are E# → ENAME, E# →→ (Time, DNAME), and E# →→ (Time, SALARY). Naturally, the flat equivalent of EMP is not in 4NF since it includes multivalued dependencies. When we apply 4NF decomposition to EMP, we obtain the flat relation schemes (E#, ENAME), (E#, (Time, DNAME)), and (E#, (Time, SALARY)), which are all in 4NF. In fact, these are the EMP_N, EMP_D, and EMP_S relations in Figure 4, where Time is separated into two additional columns. This is the reason for including only one time-varying attribute per temporal relation in the case of tuple timestamping. Thus, any attribute involved in a multivalued dependency on the temporal grouping identifier is placed into a separate relation in the case of tuple time stamping, as seen in Figure 4. If attribute time stamping and nested relations are used, all the time-dependent attributes that belong to similar entities, such as employees, can be placed in one relation, as seen in Figure 3.
Figure 5. Scheme Tree for EMP
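A sketch of this decomposition in SQL (ours; the names follow Figure 4, and EMP_S is as sketched earlier):

    -- 4NF relations under tuple timestamping: one time-varying attribute each.
    CREATE TABLE EMP_N (
      E#    NUMBER,
      ENAME VARCHAR(20));       -- E# -> ENAME

    CREATE TABLE EMP_D (
      E#      NUMBER,
      DNAME   VARCHAR(20),      -- E# ->> (Time, DNAME)
      Start_T DATE,
      End_T   DATE);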
REQUIREMENTS FOR TEMPORAL DATA MODELS

A temporal database should meet the following requirements [Tansel and Tin 1997]. Depending on application needs, some of these requirements can be relaxed. Let Dt denote the database state at time t:

1. The data model should be capable of modeling and querying the database at any instant of time, i.e., Dt. Note that when t is now, Dt corresponds to a traditional database.
2. The data model should be capable of modeling and querying the database at two different time points, intervals, or temporal sets (elements), i.e., Dt and Dt´ where t ≠ t´.
3. The data model should allow different periods of existence in attributes within a tuple, i.e., non-homogeneous (heterogeneous) tuples should be allowed. Homogeneity requires that all the attribute values in a tuple be defined on the same period of time [Gadia 1988].
4. The data model should allow multi-valued attributes at any time point, i.e., in Dt.
5. A temporal query language should have the capability to return the same type of objects it operates on. This may require coalescing several tuples to obtain the desired result.
6. A temporal query language should have the capability to regroup the temporal data according to a different grouping identifier, which could be any of the attributes in a temporal relation.
7. The model should be capable of expressing set-theoretic operations, as well as set comparison tests, on the timestamps, be they time points, intervals, or temporal sets (elements).
TEMPORAL QUERY LANGUAGES

The modeling of temporal data presented in the previous sections also has implications for the temporal query languages that can be used. A temporal query language is closely tied to how temporal atoms (temporal data) are represented, i.e., the type of timestamps used, where they are attached (relations, tuples, or attributes), and whether temporal atoms are kept atomic or broken into their components. This in turn determines the possible evaluation semantics of temporal query language expressions. There are two commonly adopted approaches: 1) snapshot evaluation, which manipulates the snapshot relation at each time point, much like temporal logic [Gadia 1986, Snodgrass 1987, Clifford, Croker and Tuzhilin 1993]; and 2) traditional evaluation, which manipulates the entire temporal relation much like traditional query languages [Tansel 1986, Tansel 1997, Snodgrass 1987]. The syntax of a temporal query language is designed to accommodate the desired type of evaluation. Temporal relational algebra includes temporal versions of the relational algebra operations, in addition to special operations for reaching time points within intervals or temporal elements [Lorentzos and Mitsopoulos 1997, Sarda 1997, Tansel 1997], slicing the times of attributes or tuples [Tansel 1986], rolling back to a previous state in the case of transaction time databases, and temporal aggregates. There are also projection and selection operations on the time dimension. These operations are all incorporated into temporal relational calculus languages too. There are many language proposals for temporal databases [Lorentzos and Mitsopoulos 1997, Snodgrass 1995, Snodgrass 1987, Tansel,
Arkun and Ozsoyoglu 1989]. SQL2 has a time data type that can implement tuple time stamping with intervals, and it is used in TSQL2 [Snodgrass 1995]. SQL3 has the capabilities to implement tuple timestamping and temporal elements, as well as attribute time stamping and nested relations.
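For instance (our own sketch over the EMP_S relation used earlier), snapshot-style access at a time point and a temporal selection over a period can be expressed as follows:

    -- Timeslice: the snapshot of EMP_S as of a bind time :t,
    -- under half-open periods [Start_T, End_T).
    SELECT E#, SAL
    FROM   EMP_S
    WHERE  Start_T <= :t AND :t < End_T;

    -- Temporal selection: employees who earned 30K at some time in [:t1, :t2).
    SELECT DISTINCT E#
    FROM   EMP_S
    WHERE  SAL = 30 AND Start_T < :t2 AND :t1 < End_T;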
IMPLEMENTATION OF TEMPORAL RELATIONS

Temporal relations that use tuple timestamping have been implemented on top of conventional DBMSs [Snodgrass 1995]. This was feasible because augmenting a relation with two additional columns that represent the end points of time intervals is possible in any DBMS. More recently, however, commercial DBMSs have included object-relational features that allow the definition of temporal atoms, sets of temporal atoms, and temporal relations based on attribute timestamping. The following code illustrates the definition of the EMP relation in Oracle 9i. Lines 1 and 2 define temporal atom types with value types of Varchar and Number. Lines 3 and 4 define the sets of temporal atoms representing the department and salary histories. Finally, line 5 creates the EMP table; Oracle requires a NESTED TABLE ... STORE AS clause for each nested table column. Note that transaction time and bitemporal relations may be defined similarly [Tansel & Atay 2006].

1. CREATE TYPE Temporal_Atom_Varchar AS OBJECT (
     Lower_Bound TIMESTAMP,
     Upper_Bound TIMESTAMP,
     TA_Value    VARCHAR(20));

2. CREATE TYPE Temporal_Atom_Number AS OBJECT (
     Lower_Bound TIMESTAMP,
     Upper_Bound TIMESTAMP,
     TA_Value    NUMBER);

3. CREATE TYPE Dept_History AS TABLE OF Temporal_Atom_Varchar;

4. CREATE TYPE Salary_History AS TABLE OF Temporal_Atom_Number;

5. CREATE TABLE Emp (
     E#     NUMBER,
     Name   VARCHAR(20),
     Dept   Dept_History,
     Salary Salary_History)
   NESTED TABLE Dept STORE AS Dept_Tab
   NESTED TABLE Salary STORE AS Salary_Tab;
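A hypothetical usage sketch (ours, not from the chapter): the type constructors populate the nested histories, and Oracle's TABLE() collection expression unnests them in queries:

    -- Populate Emp: attribute values are sets of temporal atoms.
    INSERT INTO Emp (E#, Name, Dept, Salary)
    VALUES (121, 'Tom',
      Dept_History(
        Temporal_Atom_Varchar(TIMESTAMP '2001-01-01 00:00:00',
                              TIMESTAMP '2002-02-01 00:00:00', 'Sales')),
      Salary_History(
        Temporal_Atom_Number(TIMESTAMP '2001-01-01 00:00:00',
                             TIMESTAMP '2002-06-01 00:00:00', 20000),
        Temporal_Atom_Number(TIMESTAMP '2003-06-01 00:00:00',
                             TIMESTAMP '9999-12-31 00:00:00', 30000)));

    -- Unnest the salary history to reach individual temporal atoms.
    SELECT e.E#, s.TA_Value, s.Lower_Bound, s.Upper_Bound
    FROM   Emp e, TABLE(e.Salary) s
    WHERE  s.TA_Value > 25000;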
CONCLUSION

In this chapter, we have examined the management of temporal data using relational database theory. We have covered the two approaches to modeling temporal data: tuple timestamping and attribute timestamping. We have also discussed the design of temporal databases and temporal query languages. The feasibility of implementing tuple timestamping has already been demonstrated. We have also shown the feasibility of implementing temporal atoms and attribute timestamping using the object-relational databases currently available in the market.
Acknowledgment

This research is supported by grant #68180 00-37 from the PSC-CUNY Research Award Program.
References

Allen, J. F. (1983). Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11), 832–843. doi:10.1145/182.358434

Betini, C., Jajodia, S., & Wang, S. (1988). Time granularities in databases, data mining and temporal reasoning. Springer Verlag.

Bohlen, M. H., Snodgrass, R. T., & Soo, M. D. (1996). Coalescing in temporal databases. In Proceedings of the International Conference on Very Large Data Bases.

Clifford, J., Croker, A., & Tuzhilin, A. (1993). On completeness of historical data models. ACM Transactions on Database Systems, 19(1), 64–116. doi:10.1145/174638.174642

Clifford, J., & Tansel, A. U. (1985). On an algebra for historical relational databases: Two views. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 247-265.

Date, C. J., Darwen, H., & Lorentzos, N. (2002). Temporal data and the relational data model. Morgan Kaufmann.

Etzion, O., Jajodia, S., & Sripada, S. (1998). Temporal databases: Research and practice. Springer Verlag.

Gadia, S. K. (1988). A homogeneous relational model and query languages for temporal databases. ACM Transactions on Database Systems, 13(4), 418–448. doi:10.1145/49346.50065

Lorentzos, N. A., & Johnson, R. G. (1988). Extending relational algebra to manipulate temporal data. Information Systems, 13(3), 289–296. doi:10.1016/0306-4379(88)90040-3

Lorentzos, N. A., & Mitsopoulos, Y. G. (1997). SQL extension for interval data. IEEE Transactions on Knowledge and Data Engineering, 9(3), 480–499. doi:10.1109/69.599935

McKenzie, E., & Snodgrass, R. T. (1991). An evaluation of relational algebras incorporating the time dimension in databases. ACM Computing Surveys, 23(4), 501–543. doi:10.1145/125137.125166

Navathe, S. B., & Ahmed, R. (1987). TSQL: A language interface for history databases. In Proceedings of the Conference on Temporal Aspects in Information Systems, 113-128.
Ozsoyoglu, M. Z., & Yuan, L.-Y. (1987). A new normal form for nested relations. ACM Transactions on Database Systems, 12(1).

Sarda, N. L. (1987). Extensions to SQL for historical databases. IEEE Transactions on Systems, 12(2), 247–298.

Snodgrass, R. T. (1987). The temporal query language TQuel. ACM Transactions on Database Systems, 12(2), 247–298. doi:10.1145/22952.22956

Snodgrass, R. T. (1995). The TSQL2 temporal query language. Kluwer.

Snodgrass, R. T. (1999). Developing time-oriented database applications in SQL. Morgan Kaufmann.

Tansel, A. U., et al. (Eds.). (1993). Temporal databases: Theory, design and implementation. Benjamin/Cummings.

Tansel, A. U. (1986). Adding time dimension to relational model and extending relational algebra. Information Systems, 11(4), 343–355. doi:10.1016/0306-4379(86)90014-1

Tansel, A. U. (1997). Temporal relational data model. IEEE Transactions on Knowledge and Data Engineering, 9(3), 464–479. doi:10.1109/69.599934

Tansel, A. U. (2004). On handling time-varying data in the relational databases. Journal of Information and Software Technology, 46(2), 119–126. doi:10.1016/S0950-5849(03)00114-9

Tansel, A. U., Arkun, M. E., & Ozsoyoglu, G. (1989). Time-by-Example query language for historical databases. IEEE Transactions on Software Engineering, 15(4), 464–478. doi:10.1109/32.16597

Tansel, A. U., & Eren-Atay, C. (2006). Nested bitemporal relational algebra. In ISCIS 2006, 622–633.
Tansel, A. U., & Garnett, L. (1989). Nested temporal relations. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 284-293.

Tansel, A. U., & Tin, E. (1997). Expressive power of temporal relational query languages. IEEE Transactions on Knowledge and Data Engineering, 9(1), 120–134. doi:10.1109/69.567055
Key Terms and Definitions

Coalescing: Combining tuples whose times are contiguous or overlapping into one tuple whose time reference includes the times of the constituent tuples.

Homogeneous Temporal Relation: A relation in which the attribute values in any tuple are all defined on the same period of time. In a heterogeneous relation, attribute values in a tuple may have different time periods.

Rollback Operation: Returns a relation as it was recorded as of a given time point, interval, or temporal element, in a database that supports transaction time.

Temporal Data Model: A data model with constructs and operations to capture and manipulate temporal data.

Temporal Database: A database that has transaction time and/or valid time support. In the literature it is loosely used to mean a database that has some kind of time support.

Temporal Element: A union of disjoint time intervals, where no two time intervals overlap or meet.

Time Granularity: The unit of time, such as seconds, minutes, hours, days, months, years, etc. Time advances by each clock tick according to the granularity used.

Time Interval (Period): The consecutive set of time points between a lower bound (l) and an upper bound (u), where l < u. The closed interval [l, u] includes l and u, whereas the open interval
(l, u) includes neither l nor u. Half-open intervals, [l, u) or (l, u], are defined analogously.

Transaction Time: Designates the time when data values are recorded in the database.

Valid Time: Designates when data values become valid.
Endnote

For a detailed coverage of the terminology, see [appendix A in Tansel et al. 1993] and [pp. 367–413 in Etzion, Jajodia and Sripada 1998].
This work was previously published in Handbook of Research on Innovations in Database Technologies and Applications: Current and Future Trends, edited by Viviana E. Ferraggine, Jorge Horacio Doorn, Laura C. Rivero, pp. 28-36, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 6.4
Integrative Information Systems Architecture:
Document & Content Management

Len Asprey
Practical Information Management Solutions Pty Ltd, Australia

Rolf Green
OneView Pty Ltd, Australia

Michael Middleton
Queensland University of Technology, Australia
INTRODUCTION

Purpose

This chapter discusses the benefits of managing business documents and Web content within the context of an integrative information systems architecture. This architecture incorporates database management, document and Web content management, integrated scanning/imaging, workflow, and capabilities for integration with other technologies.
Business Context

The ubiquitous use of digital content (such as office documents, email, and Web content) for business decision-making makes it imperative that adequate systems are in place to implement management controls over digital content repositories. The traditional approach to managing digital content has been for enterprises to store it in folder structures on file or Web servers. The content files stored within folders are relatively unmanaged: there are often inadequate classification and indexing structures (taxonomies and metadata), no adequate version control capabilities, and no mechanisms for managing the complex relationships between digital content. These types of relationships include embedded or linked content, content renditions, and control over authored digital documents and published Web content. In some cases enterprises have achieved a form of management control over hardcopy documents that are records of business transactions by using
database applications to register, track, and manage the disposal of physical files and documents. These types of file or document "registers" do not provide adequate controls over the capture of, retrieval of, and accessibility to digital content. This deficiency has led many organizations to seek solutions, such as document management systems, to manage digital business content. Document management systems have generally been implemented to meet regulatory compliance within the context of document recordkeeping requirements, or for the management of digital archive collections. Otherwise, they have been implemented as solutions for managing specific types of content objects, such as ISO 9001 quality management system documentation, engineering drawings, safety documents, and the like. More recently, organizations have sought to acquire Web content management systems with a view to providing controls over digital content that is published to Web sites. The imperative for such a solution may be a commercial one, motivated by product-to-market visibility, customer service, and profitability. There may also be a response to compliance needs, motivated by managing Web content in the context of "recordkeeping" to satisfy regulatory or governance requirements. The methodology of implementing document or Web content management systems has often been based on a silo approach, with more emphasis on tactical business imperatives than on support for strategic enterprise information architecture initiatives. For example, organizations may attempt a Web content management solution without taking into full account digital documents that may be used to create content outside the constraints of Web-compatible formats (such as XML-defined formats) but which are subsequently required for publication. Thus, document and Web content management may be viewed as discrete solutions, and business applications may be implemented without an integrative approach using workflow and systems for managing both business documentation and Web content.
Another example of a silo approach is the deployment of database solutions without cognizance of document or Web content management requirements. For example, organizations may deploy a solution for managing contracts, including database application capabilities for establishing the contract, recording payments and variations, and managing contract closure. However, the management of contract documents may not be viewed as an integral part of the application design, of the workflow review and approval, or of managing the published contract materials on Web sites. The result is that users often miss vital information because they must manually relate data retrieved through a number of separate applications. There are compelling reasons for organizations, as they address the constructs of enterprise information architecture, to consider the management of digital content within the context of an integrative approach to managing business documents and Web content. The strategic rationale for such an approach encompasses the following types of business imperatives:

• Customer satisfaction is a key commercial driver for both business and government: in the commercial sector, the need to attract and retain customers; in the public sector, the need to support government initiatives directed at taxpayer benefits. Organizations are adopting strategic approaches such as a single view of the customer and one-source solutions for customer information, invoking the use of information and knowledge management tools.

• Speed and quality of product to market is another major business driver. The rapid adoption of the WWW and e-commerce systems to support online business transactions opens markets to global competition. Commercial enterprises are required to deliver product to market not only rapidly, but also within quality management constraints, to attract and retain customers.

• Regulatory imperatives, such as Sarbanes-Oxley in the U.S. (US Congress, 2002), have introduced new measures for creating greater transparency within organizations, which impact corporate governance and require disclosure with real-time reporting requirements.
The enterprise information architecture would include information policy, standards, and governance for the management of information within an organization, and would provide supporting tools in the form of an integrative information systems architecture as the platform for managing information. An integrative systems architecture would provide a platform that enables businesses to meet the challenges of both commercial and regulatory imperatives, benefit from reusable information, and provide a coherent view of relevant information enterprise-wide to authorized users. With respect to document and Web content management, the integrative document and content management (IDCM) model (Asprey and Middleton, 2003) offers a framework for the unification of these components into an enterprise information architecture. The model features the management of both documents and Web content within an integrative business and technology framework that manages designated documents and their content throughout the document/content continuum and supports recordkeeping requirements. Many of the major software vendors, including Oracle, IBM, EMC, and Microsoft, have embraced this concept by providing portal, document and content management, collaboration, and workflow capabilities as a combined offering. SharePoint from Microsoft provides these capabilities through integration with the .NET framework, allowing for the development of highly customized applications. Vendors such as IBM, Oracle, and EMC have provided these capabilities by acquiring and integrating various applications, and these may
be marketed as enterprise content management (ECM) suites.
SCOPE

The core IDCM elements that address document and Web content management requirements (capturing content, reviewing/authorizing content, publishing content, and archival/disposal) comprise:

• Integrated document and Web publishing/content management capabilities.
• Integration of document imaging capabilities.
• Recognition technologies, such as barcodes to assist with capturing document information, or conversion of image data to text as a by-product of scanning/imaging.
• Enterprise data management capabilities.
• Workflow.

However, when determining requirements within the context of process improvement initiatives that help to address business imperatives (such as customer satisfaction, product to market, and regulatory compliance), these capabilities might be supported by other technologies. This technology support may help businesses to achieve an integrative systems architecture for the deployment of innovative and integrated solutions. Typical integration technologies include:

• Universal Access/Portal, which allows users to invoke functions and view information (including digital content) via a Web-based interface.
• Data Warehouses, which are centralized repositories for storing enterprise data elements that are derived from business information systems.
• Enterprise Resource Planning (ERP) systems, human resource systems, financial systems, and vertical line of business systems.
Figure 1. Information systems architecture – document/content management
These types of capabilities, when combined, augment an integrative systems architecture to support the development of solutions that take advantage of digital content in managed repositories. Users that access business information then have confidence that they are accessing, retrieving, and printing the most current digital content. This confidence can aid decision making, as end users are not then required to access physical copies of documents, which can be both cumbersome and time-consuming given customer expectations on speed of service in a modern technology environment. The following section discusses those features of an integrative information systems architecture that would support a requirement for business
users to gain rapid access to the most up-to-date digital content via an interface that is simple and intuitive.
SYSTEM FEATURES

Schematic

Figure 1 provides a schematic of database management within the context of IDCM and supporting technologies, such as a universal interface (e.g., portal) and interfaces to business applications.
Document Management

Document management applications are used within enterprises to implement management controls for digital and physical document objects.
These objects may include office documents, email and attachments, images and multimedia files, engineering drawings, and technical specifications. The systems manage complex relationships between documents and provide capabilities for managing integrity, security, authority, and audit. Business classification and metadata capabilities support access, presentation, and disposal (Bielawski & Boyle, 1997; Wilkinson et al., 1998), thus supporting organizational knowledge management initiatives. Document management applications typically feature recordkeeping tools to support the implementation of organizational recordkeeping policies, file plans, and retention policies.
Web Content Management

Web content management applications are implemented by organizations to manage digital content that is published to the Web, including Internet, intranet, and extranet sites. The functionality of Web content management applications can be summarized as content creation, presentation, and management (Arnold, 2003; Robertson, 2003; Boiko, 2002). Organizations are recognizing the requirement for integrated architectures for managing documents and Web content, and the need to consider the implications of recordkeeping requirements when seeking unified document and content solutions.
Document Imaging

Document imaging systems are used to scan and convert hardcopy documents to either analog (film) or digital format (Muller, 1993). Film-based images can be registered and tracked within a document management system, similar to the way in which a paper document can be registered and tracked. Digital images can be captured into either a document or a Web content management system via integration with the scanning system.
Digital imaging may involve black-and-white, grayscale, or color imaging techniques, depending on the business need, as the requirement may involve scanning a range of document types, e.g., physical copies of incoming correspondence, forms processing, or capture of color brochures. Some systems allow users to search and view film images on their local computer by having the film scanned on demand. Viewing film images in this way is slow, because the specified film roll must be loaded in the scanning device, the film wound to the correct position, and the image then scanned. Digital images may be stored on magnetic storage, allowing rapid recall by users and so supporting not only the records archive process but also the demands of workflow and document management.
Recognition Systems

Recognition systems, such as barcode recognition, Optical Mark Recognition (OMR), Optical Character Recognition (OCR), and Intelligent Character Recognition (ICR), are often used to facilitate data capture when documents are scanned to digital image. Systems can take advantage of encoding within barcodes, which can be intelligently encoded to assist with improving business processes. For example, a barcode may contain specific characters that identify a geographical region, a product type, or details of a specific subject/topic of interest. OMR systems are used to capture recognition marks on physical documents, such as bank checks, during digital scanning. OCR and ICR technologies allow text to be extracted from a document during digital scanning and conversion processes. The text might be an alphabetical string, a unique number, or words within a zoned area of a form. Depending on the quality of the hardcopy original document, much of the printed information might be convertible to text within acceptable error constraints.
OCR and ICR may also be used to convert existing digital documents, such as Tagged Image File Format (TIFF) or Portable Document Format (PDF) documents, to a full-text rendition capable of being searched for specific words or phrases, thereby increasing the ability to recall the primary file (e.g., the PDF).
Workflow

Workflow systems allow more direct support for managing enterprise business processes. These systems are able to automate and implement management controls over tasks from the initiation of a process to its closure. Though many application systems can be said to have some level of workflow, they lack the flexibility that enables the visual creation and alteration of processes in an enterprise workflow system. More importantly, an enterprise workflow system allows processes to flow across a number of systems rather than be isolated within a single application. The drivers for implementing workflow may be to improve both efficiency (by automating and controlling work through a business process) and effectiveness (by monitoring the events that work may pass through). Ideally, the design of such systems takes into account a unified analysis of enterprise information content (Rockley, Kostur & Manning, 2003). The implementation of workflow is often driven from a technology perspective rather than by business process. Questions of flexibility and interoperability then become the drivers for the project rather than enhancements to the business processes. The most marked effect of this is the amplification of bottlenecks within the business process when it is run through a workflow system. Workflow systems provide the integrative framework to support business processes across the enterprise. This is done by providing a capability for the business to easily represent its business processes within the workflow system, and by allowing the workflow to call the business functions specific to each system with the required data.

SYSTEM INTEGRATION

Universal Access/Portal

Universal Access/Portal applications are used within enterprises to allow the viewing and interpretation of information from a number of sources, providing a single view of information. These sources of information may include data stored within application databases, data stored in warehouses, and digital documents and other content stored within document and content management systems. The evolution of Universal Access/Portal applications is to encapsulate not only multi-system data but to provide a single application interface for all enterprise systems. Corporate portals may be differentiated by application (Collins, 2003) or level (Terra & Gordon, 2003), but they are typically aimed at enhancing basic intranet capability. This may be achieved by the provision of facilities such as personalization, metadata construction within an enterprise taxonomy, collaborative tools, survey tools, and integration of applications. The integration may extend to external applications for business-to-business or business-to-customer transactions, or for the derivation of business intelligence by associating external and internal information. Universal Access/Portal applications include the capability of understanding how the information in each system is related to the business process and the rules by which the data can be combined. They provide different views of information depending on the user's needs and allow information to be utilized to greater effect. Just as importantly, they provide the controls by which the information capture of the enterprise can be managed to support both industry and compliance requirements. True on-demand reporting of information is provided through dashboards and snapshots rather than the more passive approach of overnight printed reporting. These applications employ business intelligence to provide the integrative framework to support the utilization of information across the enterprise. This framework allows enterprises to expose applications and business rules as Web services to be used by other business systems, supporting a service-oriented architecture (SOA). Such services may be extended within larger organizations and outsourcing companies to offer information management software as a service (SaaS) to smaller companies that may not otherwise be able to implement such an integrative architecture.
Data Warehouse

Many organizations use data warehouse applications to store archived records from various business systems. In such cases data may be accessible only through a separate user interface, therefore requiring manual steps to relate the data to records within the originating and other systems. Some data warehouse implementations feature document/content management applications rather than purely database applications. Enterprise Report Management (ERM), formerly known as Computer Output to Laser Disk (COLD), software allows the storage of archived data as files and uses the database to store the metadata, allowing searching and retrieval of the data files. The implementation of data repositories within an integrated architecture allows cross-referencing and searching of corporate knowledge and information across the enterprise, as well as a single point of access to information regardless of the repository in which it resides. Integration with workflow components allows archiving to be done on a transaction basis rather than only as batch processing, as well as the archiving of workflow objects as business records.
Enterprise Resource Planning Systems

Given the business impetus towards improving customer service, such as optimization of product to market in what are mostly highly volatile environments, there is a need for management to obtain rapid access to strategic, tactical, and operational information. Management information is typically stored in database applications, such as ERP systems, human resource systems, financial systems, and vertical line of business systems. However, information stored in databases often needs to be supplemented by copies of supporting documentation. Integration between document/content repositories and operational/administrative systems enables end users to access all information that is relevant to a particular matter. For example, a copy of a contract may be required to support a contract variation, or a supporting invoice may be required to support payment processing.
Enterprise Data Management

Each of the components within an IDCM application is highly dependent on database technology for the storage of data, including the metadata relating to the various information objects. More recently, applications like Microsoft Office SharePoint Server (MOSS) 2007 require the database to store not only the metadata but also the actual content, replacing the file servers within the content tier (Figure 1) with multiple database servers. Software for managing the repositories must accommodate versioning, routing, security, and review and approval processes. It should also link to tailored interfaces, be able to deal with different renditions of documents, maintain associations within complex documents, and provide for complex queries. Information retrieval development in this area focuses on facilitating indexing and integrating taxonomies (Becker, 2003).
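To make these repository requirements concrete, the following is a minimal relational sketch of our own (the table and column names are hypothetical and not drawn from any product), showing how documents, versions, and renditions might be modeled:

    -- Core tables of a document repository with versioning and renditions.
    CREATE TABLE document (
      doc_id   NUMBER PRIMARY KEY,
      title    VARCHAR(200),
      taxonomy VARCHAR(100));            -- classification within the taxonomy

    CREATE TABLE doc_version (
      doc_id     NUMBER REFERENCES document(doc_id),
      version_no NUMBER,
      author     VARCHAR(100),
      created_at TIMESTAMP,
      PRIMARY KEY (doc_id, version_no)); -- each version of each document

    CREATE TABLE rendition (
      doc_id     NUMBER,
      version_no NUMBER,
      format     VARCHAR(20),            -- e.g., native, PDF, HTML
      content    BLOB,
      FOREIGN KEY (doc_id, version_no)
        REFERENCES doc_version(doc_id, version_no));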
Table 1. A summary of critical issues

Business Issues:
• Cost: The cost of an information systems architecture is likely to be higher than that of a tactical or silo solution.
• Planning: The extent of planning required for the acquisition and implementation of a strategic enterprise platform is likely to be greater than that required for a tactical solution.
• Specifications: The extent of specifications required for a strategic enterprise platform is likely to be more extensive than that required for a tactical solution.
• Benefits Realization: Benefits of a strategic solution may not be realized as early as those of a tactical solution. However, the benefits may be more enduring.
• Lack of Business Unit Buy-In: Autonomous business units may not see the benefit of a strategic enterprise solution, focusing instead on specific tactical and operational requirements.

Technology Issues:
• Infrastructure: The infrastructure within the organization may not be adequate for the deployment of the information systems architecture.
• Systems Integration: The integration between the components of the solution may be jeopardized if technical specifications are not correctly and thoroughly defined, or if the package selection process is not well managed.
• Evolving Technology: The evolving nature of technology, including technology convergence, may impact the long-term rollout of the information systems architecture.
• Security: The nature of the architecture, including integrated components, is likely to involve the development of detailed specifications to ensure seamless access to authorized information.
• Disaster Recovery: Storage of large amounts of data made available on demand introduces the need for complex backup procedures and longer recovery times.
The greater use of portal functions, including collaboration and knowledge tools such as wiki stores, has increased both the amount of information being stored and the number of database calls being made. Traditionally these applications used documents to store much of the data, but newer implementations have opted to store all information within the database, for reasons including simpler searching. The improvement in information usage increases the number of data calls to the supporting databases and the need for data to be accessible across a wider geographic area. Data supporting document/content management, workflow, and other business systems may need to be managed across a distributed architecture of replicated and cached data stores.
REASONS FOR UTILIZATION

The implementation of systems for managing digital documents and Web content in silo systems may only solve a tactical business requirement. Essentially, controls over content are implemented. However, silo approaches to the implementation of systems, while they perhaps solve tactical requirements, might not be the most strategic and innovative approach to solving the key imperatives facing enterprises. Solutions that embrace integrated document and Web content management, combined with universal interface/portal applications and integration with business systems, may support better opportunities for business process improvement. With respect to knowledge management, document and Web content management solutions support enterprise knowledge management initiatives where the knowledge has been made explicit. The strategic approach of managing both data in databases and digital content using an integrative systems architecture may provide more cohesive and coherent management of information, with the potential to add more value to knowledge management initiatives (Laugero & Globe, 2002). The use of the universal interface or portal will help to address usability issues with document and content management applications.
While these systems contain a wide range of functionality, the end user may not need to invoke much of it, and an enterprise portal style interface allows better presentation and management of end-user functions and viewing capabilities. Organizations may find it difficult to cost-justify digital document management or Web content management applications based purely on information management notions of "better managing enterprise document collections". The investment in the technology may outweigh the perceived benefits. However, the approach of solving strategic management challenges using an integrative systems architecture to deliver end-to-end business applications may make it easier to support a business justification. At the same time, the enterprise is able to secure a managed repository for documents and Web content to support information policy and planning.
Critical Issues

Table 1 summarizes some of the critical issues that enterprises may need to consider when reviewing requirements for an information systems architecture that features integrative document and content management capabilities.
CONCLUSION

The acquisition of enterprise document and Web content management systems might not be justified based only on notions of good information management or contribution to knowledge management. The likelihood of success for a document and content management project targeted at the enterprise may be enhanced significantly by incorporating the capabilities of these systems into an integrative information systems architecture. This platform may then allow the business to initiate projects that deliver a wide range of business
process improvement initiatives, all relying on a supporting and consistent enterprise platform that provides easy access for authorized users of information. Thus, the platform becomes strategically positioned to support business imperatives in the strategic, tactical, and operational hierarchy of an organization, and to allow the deployment of innovative and stable end-to-end business solutions.
REFERENCES

Addey, D., Ellis, J., Suh, P., & Thiemecke, D. (2002). Content management systems. Birmingham, UK: glasshaus.

Arnold, S. E. (2003). Content management's new realities. Online, 27(1), 36–40.

Asprey, L., & Middleton, M. (2003). Integrative document and content management: Strategies for exploiting enterprise knowledge. Hershey, PA: Idea Group.

Becker, S. A. (2003). Effective databases for text & document management. Hershey, PA: IRM Press.

Bielawski, L., & Boyle, J. (1997). Electronic document management systems: A user centered approach for creating, distributing and managing online publications. Upper Saddle River, NJ: Prentice-Hall.

Boiko, B. (2002). Content management bible. New York: Hungry Minds.

Collins, H. (2003). Enterprise knowledge portals. New York: American Management Association.

Laugero, G., & Globe, A. (2002). Enterprise content services: A practical approach to connecting content management to business strategy. Boston, MA: Addison-Wesley.

Microsoft. (2007). SharePoint Server home page. Retrieved March 14, 2008, from http://office.microsoft.com/en-us/sharepointserver/FX100492001033.aspx
Muller, N. J. (1993). Computerized document imaging systems: Technology and applications. Boston, MA: Artech House.

Robertson, J. (2003). So, what is a content management system? Retrieved March 14, 2008, from http://www.steptwo.com.au/papers/kmc_what/index.html

Rockley, A., Kostur, P., & Manning, S. (2003). Managing enterprise content: A unified content strategy. Indianapolis, IN: New Riders.

Terra, J., & Gordon, C. (2003). Realizing the promise of corporate portals. Boston, MA: Butterworth-Heinemann.

U.S. Congress. (2002, July 30). Public Law 107-204: Sarbanes-Oxley Act of 2002. Retrieved July 16, 2004, from http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=107_cong_public_laws&docid=f:publ204.107.pdf

Wilkinson, R., Arnold-Moore, T., Fuller, M., Sacks-Davis, R., Thom, J., & Zobel, J. (1998). Document computing: Technologies for managing electronic document collections. Boston, MA: Kluwer Academic Publishers.
KEY TERMS AND DEFINITIONS

Document Capture: Registration of an object into a document, image, or content repository.

Document Imaging: Scanning and conversion of hardcopy documents to either digital image format or analog (film) format.

Document Management: Implements management controls over digital documents via integration with standard desktop authoring tools (word processing, spreadsheets, and other tools) and document library functionality. Registers and tracks physical documents.

IDCM: Integrative document and content management.

Portal: User interface to a number of process and information sources.

Recognition Technologies: Technologies such as barcode recognition, optical character recognition (OCR), intelligent character recognition (ICR), and optical mark recognition (OMR) that facilitate document registration and retrieval.

Web Content Management: Implementation of a managed repository for digital assets such as documents, fragments of documents, images, and multimedia that are published to intranet and Internet WWW sites.

Workflow Software: Tools that deal with the automation of business processes in a managed environment.
This work was previously published in Handbook of Research on Innovations in Database Technologies and Applications: Current and Future Trends, edited by Viviana E. Ferraggine, Jorge Horacio Doorn, & Laura C. Rivero, pp. 682-692, copyright 2009 by Information Science Reference (an imprint of IGI Global).
Chapter 6.5
Identifying and Managing Stakeholders in Enterprise Information System Projects

Albert Boonstra
University of Groningen, The Netherlands
Abstract

This article focuses on how managers and sponsors of enterprise information system (EIS) projects can identify and manage the stakeholders engaged in the project. The article argues that this activity should go beyond the traditional ideas about user participation and management involvement: suppliers, customers, government agencies, business partners, and the general public can also have a clear interest in the ways that the system will be designed and implemented. The article proposes applying identification, analysis, and intervention techniques from the organization and management disciplines in the IS field to enhance the chances of success of enterprise information system implementations. Some of these techniques are combined in a coherent method that may help implementers of complex IS projects to identify and categorize stakeholders and to consider ap-
propriate ways of involvement during the various stages of the project.
Introduction

Information system implementation projects traditionally affect a number of parties, including managers, developers, and users. The notion that managers and developers should allow users to participate in system development has been a core topic of IS research and practice since the 1960s. Mumford is one of the main advocates of this notion, arguing that "people at any level in a company, if given the opportunity and some help, can successfully play a major role in designing work systems" (Mumford, 2001, p. 56). The main reasons for participation are the assumed link between participation and system success in terms of system quality, user satisfaction, user
acceptance, and system use (Markus & Mao, 2004). Mintzberg (1994) argues that people who have been consulted and have participated in the process will better understand the trade-offs between project benefits and disadvantages and will have greater trust. Neglecting participants, on the other hand, may lead to system failure and resistance towards the system (Gonzalez & Dahanayake, 2007). During recent decades, however, the traditional notion of users has been eroded by new trends in IS development, such as package installations, outsourcing, enterprise systems, and systems that cross organizational boundaries, which have changed the nature of IS practice. These trends indicate that modern information systems are increasingly complex, since they affect a broader range of stakeholders both from within and from outside the organization. This wider group of stakeholders is becoming an integral part of EIS implementation and is part of the 'sociology of technology.' Depending on the impact and scope of the system, these stakeholders may include suppliers and customers, business partners (such as banks), providers of IS/IT services, competitors, and government agencies, and, in some cases, may well extend to the press and the general public. Information systems tend to increase in scope from smaller, internal, and functional areas to enterprise-wide systems (such as ERP systems) and systems that cross company boundaries, and they may well impact personnel from different countries, with their own languages and different value and legal systems. This means that system development is increasingly an undertaking in which many different people believe that they can affect, or can be affected by, the process or the outcome of the system. Many of these people will respond to system proposals according to their interests and their perception of the impact, the function, and the objectives of the system (Rost, 2004). If project managers and others responsible for the development and the implementation are not prepared to take
these wider requirements into account, they will be reactive rather than proactive. In such circumstances, as Boonstra (2006) elucidates, progress will be disturbed by all kinds of unexpected actions and responses from a variety of stakeholders during the various stages of the project. To take such considerations into account and be proactive, systematic stakeholder management is needed in the more complicated information system projects. Stakeholder management means that stakeholders around EIS (Enterprise Information System) projects are identified and analyzed so that appropriate actions can be taken in ways that support the project (McElroy & Mills, 2003). To address this need, the objective of this article is to propose a coherent set of techniques that can be used to identify and categorize stakeholders and stakeholder relations in EIS projects, as well as to define appropriate actions towards stakeholders. Of course, methods for stakeholder identification and categorization exist. However, these methods are diverse in focus and nature and not specific to EIS development and implementation. Some focus exclusively on the identification of stakeholders (Vos & Achterkamp, 2006), others on assessing their relative importance (Mitchell, Agle, & Wood, 1997) or on ways to involve them (Bryson, 2004). The contribution of this article is that it focuses specifically on stakeholders in complex enterprise IS projects and that it combines various theories and approaches into a practical and consistent method. The article is structured as follows. It begins with the theoretical background of the stakeholder management approach, briefly discussing the stakeholder literature and its most important findings. The next section discusses the seven activities of the method and combines these activities into a coherent method. The details of the proposed method are then illustrated by a case study. The discussion section examines the extent to which the method contributes to solving
the stakeholder identification, categorization, and intervention problem in IS contexts. Finally, the conclusion discusses the strengths and limitations of the method, provides some suggestions for further work, and offers some guidance for practitioners.
Background
Orlikowski (1992) proposes that the results of information systems depend on the interaction of technology and people over an extended period. Information systems are both a product of human action and an influence on human action. People initiate, design, and use an IT system. Designers construct the system according to priorities and expectations. Then, various stakeholders, such as users and managers, react in different ways, for example, by welcoming, rejecting, or adapting it. In doing so, they socially construct the technology in the sense that these reactions may become features of the system. This continuing interaction means that the results eventually obtained differ from those originally expected. Orlikowski also suggests that stakeholders around the system can modify technologies during design, implementation, and use, since people and technology interact. Because of this, information systems are not primarily technical systems but social systems, and their design, implementation, and use involve dynamic social and political processes of articulating interests, building alliances, and struggling over outcomes (Levine & Rossmoore, 1994). To understand how a system functions in a business, we must understand the nature of the relationships between various stakeholders and their interactions with the system. Stakeholder management should be based on such an understanding. According to Mitchell et al. (1997), stakeholders can be defined in many different ways. Different kinds of entities can be stakeholders, such as persons and groups inside as well as outside
an organization. Within this research, Freeman's classical definition is a useful starting point. He defines stakeholders as follows: "A stakeholder is any group or individual who can affect or is affected by the achievement of the organization's objectives" (Freeman, 1984, p. 46). In the case of an EIS project, this implies that any person or group who can affect or is affected by the EIS is a stakeholder. Stakeholders' interests can be very diverse, since information systems affect many aspects of the business, including strategic position, cost-effectiveness, job satisfaction, security, customer satisfaction, and commercial success. This implies that many parties around an EIS project can be stakeholders. McLoughlin (1999) adds an important perspective to Freeman's definition by arguing that stakeholders are "those who share a particular set of understandings and meanings concerning the development of a given technology." He emphasizes the interpretive nature of stakeholders: stakeholders are groups or individuals who believe that they can affect or are affected by an information system. This means that stakeholders are determined by perceptions and not by objective realities. This is in line with social constructivist (Pinch & Bijker, 1997) and interpretive (Walsham, 1993) views on reality and technology. Walsham argues that people act according to subjective interpretations of their world: "our knowledge of reality ... is a social construction by human actors … there is no objective reality which can be discovered by researchers and replicated by others" (1993, p. 5). Stakeholders interpret the existing context and may act to change it to promote personal, local, or organizational objectives (Boddy, 2004). This implies that stakeholders are potential actors: people or groups with the power to respond to, negotiate with, and change the specifications of the system and eventually affect its success. Project managers and other promoters of new information systems have the responsibility to manage this process of stakeholder activity by creating a context for dealing with stakeholders in effective ways. The method presented in this article is developed to facilitate the creation of this context.

Figure 1. Stakeholder management activities
Stakeholder Management Activities
Figure 1 summarizes the method presented in this article as seven activities that can be conducted to form the basis for sound stakeholder management: 1) identify stakeholders, 2) determine the phase of involvement, 3) determine their roles within the project, 4) determine objects of involvement and related interests, 5) assess the relative importance of stakeholders, 6) determine the degree of involvement, and 7) develop an action plan. Each activity is explained in more detail in the following paragraphs. The order of these activities will depend on the preference of the user of the method and the
specific project characteristics. The order as presented in Figure 1 can be followed at the start, but activities often have to be reconsidered, given the results of subsequent activities. Mintzberg also argues that stakeholder management should be "… an ongoing and adaptive process …" (1994, p. 74). Therefore, the activities of stakeholder management are often interactive and dynamic (see Figure 2). As time elapses, stakeholder managers may come to new insights, often caused by unexpected actions of stakeholders or because stakeholders influence each other, which may lead to redoing certain activities and to adapting the action plan for managing stakeholders (Activity 7).
Figure 2. Enterprise information system stakeholder management as an interactive and dynamic activity

Identify Stakeholders
An important activity of deliberate stakeholder management is to identify the relevant players. Identifying stakeholders comes down in essence to the question "who are they?" For dealing with this question, the stakeholder literature offers a variety of
classification models (Mantzana, Themistocleous, Irani, & Morabito, 2007; Mitchell et al., 1997; Pouloudi & Whitley, 1997; Vos & Achterkamp, 2006; Wolfe & Putler, 2002). Identifying stakeholders around an IS means that a stakeholder mapper makes a subjective interpretation of who affects and who will be affected by a new information system. This means that groups or individuals have to be identified and that boundaries have to be drawn. There are various approaches to identifying stakeholders, which can be regarded as complementary. One is to use frameworks (Mantzana et al., 2007), often in the form of checklists of generic types of stakeholders, such as "customers, suppliers, competitors, users, managers, and developers" (Cummings & Doh, 2000). Such checklists can be adapted to make them more suitable for particular industries, for example, doctors, patients, pharmacies, and pharmaceutical companies. They can then be used to determine which specific groups of stakeholders are important in relation to the information system, for example, neurologists, patients with neurological diseases, pharmacy A, or Merck. This way of identifying stakeholders is quite generic but can serve as a useful starting point.
A second approach is to pose questions concerning the information system and its nature. Relevant questions that may help to identify relevant groups and individuals in this context are (Cavaye, 1995; Pouloudi & Whitley, 1997):

• Who are the initiators of the system?
• Who are the sponsors of the system?
• Who have to adopt the system and make it work?
• Who are the intended users?
• Who will receive the output of the system?
• Who are the intended developers and operators of the system?
• Who will be impacted and affected by the system?
• Who will win or lose by using the system?
Answers to these questions may reveal stakeholders not yet identified by the first approach or make it possible to refine certain categories of stakeholders into relevant subgroups.
A third approach that complements the other two is the network approach. In this context, stakeholders are actors in a network because they "perform activities and/or control resources within a certain field" (Håkansson, 1987, p. 14). This approach is very important in the diagnosis of complex information systems, because it focuses on the network of relations within and among organizations. Analyzing these dynamic networks of interactions and activities may help to identify stakeholders and their roles (Walsham, 1993). Network theory also emphasizes the dynamic, flexible, and contextual nature of complex information systems. This implies that it is not possible to draft one static stakeholder map. Stakeholders depend on a specific context that may change over time. In addition, stakeholders are related and may influence each other, and therefore they cannot be viewed in isolation. Perceived interests and power of stakeholders are interactive and may change over time (Pouloudi & Whitley, 1997).
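As a minimal sketch of this network view, assuming a simple adjacency-list representation with hypothetical actor names and relations (none of which come from the article itself), a breadth-first traversal from the focal project surfaces candidate stakeholders, including those only indirectly connected to it:

from collections import deque

# Hypothetical actor network for illustration: each actor is mapped to the
# actors it performs activities with or shares resources with.
relations = {
    "EIS project": ["hospital directors", "IS department"],
    "IS department": ["software vendor", "hospital doctors"],
    "hospital doctors": ["general practitioners", "patients"],
    "hospital directors": ["health insurers"],
}

def candidate_stakeholders(focal):
    """Breadth-first traversal: every actor reachable from the focal
    project through activity or resource relations is a candidate
    stakeholder, including indirectly connected ones."""
    seen = {focal}
    queue = deque([focal])
    found = []
    while queue:
        actor = queue.popleft()
        for neighbor in relations.get(actor, []):
            if neighbor not in seen:
                seen.add(neighbor)
                found.append(neighbor)
                queue.append(neighbor)
    return found

print(candidate_stakeholders("EIS project"))
# ['hospital directors', 'IS department', 'health insurers',
#  'software vendor', 'hospital doctors', 'general practitioners', 'patients']

Because the relations change over time, such a map would have to be redrawn repeatedly rather than treated as a one-off deliverable, in line with the dynamic view above.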
Determine the Phases of Stakeholder Involvement
An information system follows a life cycle that begins when the system is conceived and ends when it is no longer available for use. Various authors have proposed phases that can be distinguished in this life cycle (e.g., Markus, Axline, & Petrie, 2000; Turner, 2006). A typical IS life cycle includes a concept stage, a development stage, an implementation stage, an operation and maintenance stage, and a termination stage. In practice, however, these processes can be quite different and more complicated (Boonstra, 2003). Some information system life cycles proceed in a strict sequential order; sometimes phases overlap, run in parallel, or are repeated.
Attention to the life cycle in relation to stakeholders is important because it draws attention to the point that stakeholder activity and involvement may differ across phases. This implies that stakeholder management is a dynamic process, since new players may enter the field and others may leave or move to the periphery over time. In the proposed method, these dynamics are explicitly acknowledged. Other authors who stress the dynamic nature of stakeholder identification are Mitchell et al. (1997) and Mantzana et al. (2007); however, they have not made these essential dynamics an integral part of their methods. During the conceptual phase, a reason for a certain stakeholder to propose the implementation of an information system may be that the system strengthens its position in the organization or in the value chain, for example, by providing more reliable information and transparency. Other stakeholders may interpret the availability of such information as a threat. During the development phase, a stakeholder's main concern is to shape the system to fit its interests. The implementation phase can be used by stakeholders to support implementation activities or to hinder them, depending on their specific interests. During the operational phase, the main concern is to align the system with the stakeholders' ongoing concerns. Actions of stakeholders may also vary during the various stages, from dormant to very active and vice versa. For stakeholder managers, this may imply that it is important to involve stakeholders actively in phases in which they are dormant, to prevent outright opposition during later stages.
Determine Stakeholders' Roles within the Project
Within complex EIS projects, different stakeholders will often have different roles. To understand stakeholder attitudes and responses toward EIS initiatives, it is important to attach such roles explicitly to stakeholders. The roles and responsibilities that people have are a determining factor of their attitudes and responses to initiatives and change ideas.
Table 1. Stakeholder roles in EIS projects

• Owner: who provides resources to develop or buy the system;
• Project manager: who monitors the performance of the project and takes action if the project does not achieve the desired progress;
• User: who will use the system by entering data or by retrieving information from it;
• Developer: who designs, develops, or adapts the system; this includes consultants who contribute to the IS project;
• Decision maker: who will be involved in decisions with regard to the scope and design of the system;
• Passively involved: who will be affected by the consequences of the system but is not willing or able to play one of the above-mentioned roles.
Table 2. Objects and related issues within EIS projects

1. Scope of the system: goals and business advantages of the system
2. Business model: work flow model and business processes
3. Functioning enterprise: inputs and outputs of the system
4. System model: human interface architecture
5. Technology model physics: presentation architecture
6. Component configuration: security of the system, compatibility
7. System project: project performance, time, money
Based on Turner (2006), Vos and Achterkamp (2006), and Mantzana et al. (2007), Table 1 lists the possible roles of stakeholders in EIS projects. Vos and Achterkamp (2006) also distinguish between stakeholders who are directly involved and those who are represented. In EIS projects, this is relevant since many stakeholders are represented by others who may formally act on their behalf. In practice, stakeholders can play different roles at the same time, and roles may also change over time or during subsequent phases. For example, the project manager of a large information system project can also be a business manager who eventually has to work with the system.
Determine Objects of Possible Involvement and Related Interests
Stakeholders are generally interested in certain objects of the system. It is important to identify those objects as well as the related interests. Objects
can vary from the scope and objectives of the system to programming issues, and from security to consequences for work and employment. Zachman (1999), with his work on information system architectures, provides a useful contribution for identifying relevant objects of involvement. Based on this work, Table 2 identifies the objects and related issues. In many EIS projects, decision makers, such as business managers, tend to focus mainly on the first three objects, whereas developers are more directed to objects 3–6. Users, on the other hand, tend to be interested in objects 2–4. For effective stakeholder management, it is important to identify the objects that are relevant for each group of stakeholders in order to keep them interested. When these objects have been identified, it is important to determine the specific interests that are at stake in relation to these objects of the information system. Freeman (1984) uses the term "stake" interchangeably with "interest." Stakes or interests motivate stakeholders and are important determinants of stakeholders' priorities.
Stakeholders often have different interests and different priorities among shared interests. Interests can also conflict (e.g., some stakeholders may be in favor of central decision making, while others may strive towards local autonomy). Besides, certain interests may not be realistic, meaning that they cannot be realized by the system. Because of this, stakeholders tend to follow different agendas to achieve their goals. Examples of interests of stakeholders in enterprise information systems (e.g., Doherty, King, & Al-Mushayt, 2003; Müller & Turner, 2007; Saarinen, 1996; Shenhar, Levy, & Dvir, 1997) are that the system:

• promotes the realization of strategic objectives of the company or the business unit (scope);
• delivers a positive return on investment (scope);
• will strengthen business process performance (business model);
• provides good value and quality of information (functioning enterprise);
• is easy to use and user friendly (system model and presentation architecture);
• promotes work satisfaction (system model);
• promotes employment (functioning enterprise);
• has good technical performance, maintainability, and security (technology model physics);
• will, as a project, be completed in time and within budget (system project).
The objects in which stakeholders are interested imply the criteria against which they will assess EIS projects. These criteria will often be highly related to the roles that these stakeholders have (Fowler & Walsh, 1999; Wolfe & Putler, 2002). For example, top managers (a function) will often have roles as owner or decision maker. Given this function and these roles, they will be interested in the scope and business model objects and will generally assess EIS projects against the criteria of strategy, return on investment, business process performance, and project performance. Users of the system, on the other hand, will focus on objects like the business model and functioning enterprise and will be interested in issues like work satisfaction, employment, and user friendliness. If customers of the implementing organization are users (e.g., in the case of a Web site for ordering goods or services), they may assess the system against criteria such as compatibility with their own systems, ease of use, and attractiveness of the system compared to those of alternative suppliers. These examples show that it is important to understand the roles that stakeholders play, the objects in which they have an interest, and the specific interests that they may have as a consequence of those roles.
Assess Their Relative Importance
It is also useful to categorize stakeholders by their relative salience during the various stages of the project. Salience is the degree to which managers give priority to stakeholder claims. For classifying stakeholders, Mitchell et al. (1997) developed the stakeholder salience model, which categorizes stakeholders according to their possession of three relationship attributes: power, legitimacy, and urgency (see Figure 3). A group has power to the extent that it has access to coercive, utilitarian, or normative means for imposing its will in the relationship. Legitimacy is a social good, more over-arching than individual self-perception, and is shared amongst groups, communities, or cultures. Urgency is based on time sensitivity and criticality. Combining these attributes generates eight types of stakeholder: dormant, discretionary, demanding, dominant, dependent, dangerous, definitive, and non-stakeholder (Mitchell et al., 1997). Stakeholder salience theory suggests that the amount of attention attributed to these categories should not be equal and that the model can be used to give priority to competing claims of stakeholders.

Figure 3. Stakeholder salience model (Mitchell et al., 1997)
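As an illustration only (Mitchell et al. present this typology diagrammatically, not as code), the eight categories can be derived from boolean judgments of the three attributes, a sketch of which follows:

def salience_category(power, legitimacy, urgency):
    """Map Mitchell et al.'s (1997) attributes to a stakeholder category."""
    attributes = (power, legitimacy, urgency)
    count = sum(attributes)
    if count == 0:
        return "non-stakeholder"
    if count == 3:
        return "definitive"  # all three attributes: highest salience
    if count == 1:  # latent stakeholders hold a single attribute
        # power only -> dormant; legitimacy only -> discretionary;
        # urgency only -> demanding
        return ("dormant", "discretionary", "demanding")[attributes.index(True)]
    # expectant stakeholders hold two attributes
    if power and legitimacy:
        return "dominant"
    if power and urgency:
        return "dangerous"
    return "dependent"  # legitimacy and urgency without power

# Example: a legitimate, urgent claim without power is 'dependent'
print(salience_category(power=False, legitimacy=True, urgency=True))

Treating the three attributes as simple booleans is of course a simplification; in practice each is a matter of interpretation and degree, as the perceptual view of stakeholders above makes clear.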
Determine Degree of Involvement
Promoters of an EIS project should consider the interests and relative importance of stakeholders in order to balance plural, and sometimes conflicting, perceived interests and power. Whether this negotiation is explicit or not, the stakeholders and their interests have to be managed in order to move forward and achieve the objectives. Thus, once a decision has been made about involving a stakeholder on a certain object, the degree of involvement should be clarified. Many managers will let this depend on the salience of the stakeholder during a particular phase of the project and on the interests, roles, and expertise of that stakeholder. Degrees of engagement can vary (Bryson, 2004):

• inform (the stakeholder is provided with information on the system without further real influence);
• consult (the stakeholder and project members exchange information and opinions and there is real openness to advice);
• involve (the stakeholder has a main influence on the development and implementation of one or more objects of the information system);
• collaborate (the stakeholder becomes co-responsible for the development and implementation of one or more objects of the information system);
• empower (the stakeholder is responsible for the development and implementation of one or more objects of the information system).
Develop an Action Plan
Based on the activities described above, promoters of the EIS can develop a stakeholder management action plan. Such a plan will normally consist of ideas, structures, and systems of communication and interaction with stakeholders (Cummings & Worley, 2005). It sets out what will be done to engage the various stakeholders of the project in ways that correspond with their roles, interests,
and salience. Appropriate communication with the range of stakeholders helps EIS project managers to become sensitive to their environment and to know which parties are supportive and which are indifferent, critical, or opposed to the development and implementation of a new enterprise information system. It also helps the project leader to become credible and to communicate with stakeholders in appropriate ways.
Method and Case Study
Based on these stakeholder management activities, a generic form of the systematic stakeholder management approach can be developed (see Table 3). The table can be used to identify stakeholders (column 1), to determine in which phases of the project they may play a role (column 2), to determine the roles they have within each phase
(column 3), their objects of involvement and related interests (column 4), their salience (column 5), and the possible degree of involvement throughout the various phases (column 6). Based on such an analysis, a specific action plan has to be developed (column 7). Table 3 implies that some stakeholders can become participants during certain phases. Participants are subsets of stakeholders who are actually given the chance to participate in concept, development, implementation, or operation activities (Markus & Mao, 2004). Other stakeholders may only be informed or consulted. Some groups may be more able and willing to participate in certain stages than others. Project managers will tend to provide opportunities to participate to the more salient stakeholders (column 5). Involvement and participation activities (column 7) may differ throughout the various phases and can be more or less intensive.
Table 3. Stakeholder management method

1. Stakeholder (Pouloudi & Whitley, 1997; Vos & Achterkamp, 2006; Bryson, 2004; Mantzana et al., 2007): name of the individual, unit, or organization.
2. Involvement in phase (Markus et al., 2000; Turner, 2006): concept, development, implementation, operation, maintenance, or termination.
3. Roles (Turner, 2006; Vos & Achterkamp, 2006; Mantzana et al., 2007): owner, project manager, user, developer, decision maker, or passively involved; directly involved or represented.
4. Objects of involvement and related interests (Zachman, 1999; Shenhar et al., 1997; Saarinen, 1996; Doherty et al., 2003; Müller & Turner, 2007). Objects: scope of the system, business model, functioning enterprise, system model, technology model physics, component configuration, system project. Interests: strategy, return on investment, business process performance, work satisfaction and employment, user friendliness, technical performance and security, quality of information, project performance.
5. Relative importance (Mitchell et al., 1997): salience is based on legitimacy, urgency, and power; a stakeholder can be dormant, discretionary, demanding, dominant, dependent, dangerous, definitive, or a non-stakeholder.
6. Degree of involvement (Burke, 1997; Bryson, 2004): inform, consult, involve, collaborate, or empower.
7. Specific action plan (Cummings & Worley, 2005).
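To make the structure of Table 3 concrete, the sketch below captures one row of the method as a small data structure. The class and field names are illustrative additions, not part of the published method, and the example row loosely follows the hospital case discussed next:

from dataclasses import dataclass, field
from enum import Enum

class Involvement(Enum):  # column 6 (Bryson, 2004)
    INFORM = 1
    CONSULT = 2
    INVOLVE = 3
    COLLABORATE = 4
    EMPOWER = 5

@dataclass
class StakeholderEntry:
    name: str                                      # column 1
    phases: list = field(default_factory=list)     # column 2: concept ... termination
    roles: list = field(default_factory=list)      # column 3: owner, user, developer ...
    objects_interests: dict = field(default_factory=dict)  # column 4
    salience: str = "non-stakeholder"              # column 5 (Mitchell et al., 1997)
    involvement: Involvement = Involvement.INFORM  # column 6
    actions: list = field(default_factory=list)    # column 7: specific action plan

# Hypothetical example row:
doctors = StakeholderEntry(
    name="hospital doctors",
    phases=["concept", "development", "implementation", "operation"],
    roles=["decision maker", "user"],
    objects_interests={"functioning enterprise": ["quality of information"]},
    salience="definitive",
    involvement=Involvement.INVOLVE,
    actions=["requirements generation", "prototype sessions"],
)
print(doctors.involvement.name)  # INVOLVE

Since phases, roles, salience, and involvement may all change during the project, such entries would need to be revised continuously rather than completed once.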
Examples of such activities are discussion meetings, requirements generation, testing, training sessions, information sessions, and brainstorming sessions. Various methods and techniques can be used to engage stakeholders, such as working via prototypes, Joint Application Development (JAD), brainstorming sessions, and forums.
The following case study shows how the method was applied during the development of an electronic patient record in a regional health care system. The description does not intend to provide a complete stakeholder management approach; it only gives an impression of how the method can be used in practice.
The electronic patient record was proposed by the management of a large hospital. The system was intended to enable hospital doctors, hospital departments, GPs, and pharmacies to share medical data. Within this concept, authorized users could access patient data from any location, and the system would conduct automated checks for drug and allergy interactions, record clinical notes and prescriptions, schedule appointments, and distribute laboratory data. The hospital's Information Systems (IS) department would manage the system; costs would be shared; and a representative council would provide control. Since the management of the hospital felt that this would be a complex project, not only from a technical viewpoint but also from a contextual perspective, they decided to explore this context in depth in terms of stakeholders by using the method explained above. Promoters of the system first identified the most relevant stakeholders by using a combination of the approaches described in the previous section. These promoters included a manager of the hospital responsible for information systems, the IS director, and an external consultant. For each stakeholder group, the other cells were also completed, in most cases per phase or per object. This led to a preliminary action plan (column 7 of Table 4 references specific action plans). (Table 4 shows only a part of the table to give an impression of the use of the method.)
Completion of this table led to extensive discussions about the identification of stakeholders (who are they?), but also about interests, power, roles, and ways to include stakeholder interests in the concept and development of the system. During the concept phase, two sessions of two hours were held to develop the table and make a first sketch of an action plan (consisting of references to ideas for stakeholder involvement; see column 7 of Table 4). This plan was used by the project manager, who was appointed at a later stage. While using the method, the project manager discovered that the power and interests of some stakeholders had been assessed in ways that proved inadequate. In other cases, certain stakeholders received too much attention from the project manager, which led to a degree of influence that did not match their actual salience. This led to adaptations of the plan throughout the project. During the identification activity, promoters ignored the general public (and potential patients) and perceived them as non-stakeholders. For that reason, the general public did not have a place in the stakeholder management table and was ignored by the project manager. However, a regional newspaper then covered the development of the electronic patient record. Based on those articles, a patient organization asked about the reasons for the system and about privacy issues. During an evaluation, project managers agreed that a more proactive approach to the public, by providing information brochures and advertisements, would have better prepared the public and supported the project.
Discussion
The method as described combines various theories and brings them together into a practical approach for managing stakeholder relationships that managers can use to consider stakeholders in a systematic manner. The method helps users to link stakeholder identification and analysis with influence and intervention.
Table 4. Stakeholder management plan for electronic patient record system (excerpt; columns as in Table 3)

Hospital directors:
• Concept: owner / decision maker; objects: scope of the system, business model, functioning enterprise; interests: strategy, return on investment; salience: definitive; involvement: empower (Ref 1).
• Development: decision maker; objects: business model, functioning enterprise; interests: business process performance, project performance; involvement: involve (Ref 2).
• Implementation: passively involved; objects: functioning enterprise; interests: project performance, system use; involvement: consult (Ref 3).
• Operation: passively involved; objects: functioning enterprise; interests: quality of information; salience: dormant; involvement: inform (Ref 4).

IS department of hospital:
• Involved from concept through operation; roles: decision maker, developer, and (in operation) owner; objects: business model (concept), then system model, technology model physics, and component configuration, plus functioning enterprise in operation; interests: technical feasibility, technical performance and security; salience: demanding; involvement: consult and involve, rising to empower in operation (Refs 5-8).

Project manager:
• Development and implementation: project manager; objects: system project; interests: project performance; salience: dependent; involvement: collaborate (Ref 9).

External consultancy:
• Development and implementation: developer; objects: business model, functioning enterprise; interests: business process performance, project performance; salience: dependent; involvement: collaborate (Ref 10).

Hospital doctors:
• Involved from concept through operation; roles: decision maker (early phases) and user (later phases); objects: scope of the system, business model, functioning enterprise; interests: quality of work, autonomy, financial consequences, quality of information; salience: definitive; involvement: involve and collaborate (Refs 11-14).

General practitioners:
• Involved mainly during development and implementation; roles: passively involved and user; objects: scope of the system, business model; interests: autonomy, financial consequences, quality of work and information; salience: dominant to discretionary; involvement: inform and involve (Refs 15-17).
The method also goes beyond traditional ideas of user participation, since it suggests identifying all affected parties around a project. Using the method draws attention to strategies for managing stakeholder relationships: for example, engaging the participation of powerful supportive stakeholders while simultaneously attempting to deal with opponents through processes of negotiation, communication, and education. It is important to recognize the dynamic nature of the stakeholder map, which means that the actual stakeholder approach has to be reviewed periodically. This is because stakeholders depend on a specific context, which may change over time in ways that are difficult to predict in advance (Pouloudi & Whitley, 1997, p. 5). One reason for this dynamic nature of the stakeholder environment is that stakeholders are often interrelated, and the effects of these relationships are hard to predict. For that reason, it is not possible to design a complete stakeholder map if only the relations between stakeholders and the project are considered and the effects of the project on their mutual relationships are ignored. In the illustrated case, one specific group of hospital doctors (dermatologists) was not cooperative in the attempt to develop one electronic patient file. They preferred to use their own system. When other specialists became aware of this resistance by the dermatologists, they also tended to become less cooperative. As a consequence, representatives of all specialties had to become involved in the development process, instead of one representative for all. That meant a major change in the stakeholder management process. This example shows how stakeholders can influence each other in unexpected ways, which may have an impact on the development process. Another reason to review a stakeholder approach is changing circumstances. For example, when project costs or project duration turn out higher or lower than initially expected, the attitudes of stakeholders may change. In the case of the electronic patient file, hospital directors became more critical towards the system once they discovered
the increasing costs of the development and the growing resistance from many other stakeholders. This led to a review of the relations around the system and to other stakeholder management approaches. The project leader had to "empower" the directors again during the development process to make decisions about the further course of action. Despite the issues mentioned above, it may still be useful to design a preliminary plan for dealing with stakeholders at the start of an EIS project. Such a plan can be used to establish stable communication structures and to identify the issues to be dealt with, as well as the status of meetings, for instance, informative, consultative, or decision making. The method has been applied by more than 50 managers responsible for IS projects, as well as by general managers who were decision makers in such projects. During feedback sessions, these managers generally commented that the method helped them to consider the stakeholder environment and stimulated them to think about stakeholders and appropriate actions, both in advance of and during the project. However, data analysis of their experiences is not yet sufficiently advanced to evaluate the effectiveness of the method.
Future Trends
An agenda for further research can be directed at specifying how certain degrees of stakeholder involvement can lead to effective action plans (Activity 7). The approach as described in this article leaves this issue quite open. However, it is very relevant to include effective action plans in the method, related to the various degrees of involvement throughout the various stages of the life cycle. Another interesting piece of future work is to address the question of whether stakeholder analysis helps to produce desirable (or better) information systems. Finally, there is limited literature on how stakeholders around information systems are interrelated. Such research
should provide suggestions about implications of inter-relatedness for action plans of stakeholder involvement.
Conclusion
Since information systems increasingly affect the interests of many parties, it is essential for management to focus on these parties by identifying them and by assessing their interests and their capabilities to influence the design, implementation, and use of the proposed system. It is a common mistake to rush the implementation of a complex information system and to ignore the interests of various parties or to take them for granted. This may lead to system failure, troubled relations with parties, or other undesirable effects. IS development in complex environments is often a situation where no one is fully in charge and many are involved, affected, or have certain responsibilities (Bryson, 2004). In such a diffuse situation, it is essential to think about policies and actions that promote effective development and implementation activities. The main contribution of this article is that it suggests a practical method that helps managers to identify stakeholders as well as their interests and power, and to develop possible actions to involve them or include their interests in the specifications of an information system. Taking deliberate steps to diagnose EIS initiatives, to involve relevant parties, and to divide benefits among stakeholders prior to, during, and after implementation may prevent failure and disappointment. The method may help managers to be sensitive to their environment and to shift from a one-dimensional, technological, and linear approach to system development to a multiple-perspective assessment. The method is flexible and acknowledges and addresses the time dimension of stakeholder management. Stakeholders may come and go during the process, and interests and power positions may
change during the design and implementation process. Because of this, the method should be used on a continuous basis to monitor and revise the positions and interests of stakeholders. A limitation of the method is that it does not address the philosophies that project managers may have towards stakeholders. Some emphasize normative, moral, or ethical views on stakeholder management (Yuthas & Dillard, 1999; Gibson, 2000), while others hold a more instrumental view that emphasizes a contingent relationship (Donaldson & Preston, 1995). Independent of those views, managers can use the method as an integrated approach that combines theories on stakeholder management and is intended to help implementers and managers think systematically about how to deal with the various social groups that affect or are affected by the system. It accommodates instrumental, pragmatic, and opportunistic, as well as normative, views on stakeholder management (Hirschheim & Klein, 1989).
References
Achterkamp, M. C., & Vos, J. F. J. (2006). A framework for making sense of sustainable innovation through stakeholder involvement. International Journal of Environmental Technology and Management, 6(6), 525–538. doi:10.1504/IJETM.2006.011895
Boddy, D. (2004). Responding to competing narratives: Lessons for project managers. International Journal of Project Management, 22(3), 225–234. doi:10.1016/j.ijproman.2003.07.001
Boonstra, A. (2003). Structure and analysis of IS decision making processes. European Journal of Information Systems, 12(3), 195–209. doi:10.1057/palgrave.ejis.3000461
Boonstra, A. (2006). Interpreting an ERP implementation from a stakeholder perspective. International Journal of Project Management, 24(1), 38–52. doi:10.1016/j.ijproman.2005.06.003
Bryson, J. M. (2004). What to do when stakeholders matter: Stakeholder identification and analysis techniques. Public Management Review, 6(1), 21–53. doi:10.1080/14719030410001675722
Cavaye, A. L. M. (1995). User participation in system development revisited. Information & Management, 28(5), 311–326. doi:10.1016/0378-7206(94)00053-L
Cummings, J. L., & Doh, J. P. (2000). Identifying who matters: Mapping key players in multiple environments. California Management Review, 42(2), 83–105.
Cummings, T. G., & Worley, C. G. (2005). Organization development and change. Mason, OH: Thomson.
Doherty, N. F., King, M., & Al-Mushayt, O. (2003). The impact of inadequacies in the treatment of organisational issues on information systems development projects. Information & Management, 41(1), 49–62. doi:10.1016/S0378-7206(03)00026-0
Fowler, A., & Walsh, M. (1999). Conflicting perceptions of success in an information systems project. International Journal of Project Management, 17(1), 1–10. doi:10.1016/S0263-7863(97)00063-X
Freeman, R. E. (1984). Strategic management: A stakeholder approach. Boston: Pitman.
Gibson, K. (2000). The moral basis of stakeholder theory. Journal of Business Ethics, 26(3), 245–257. doi:10.1023/A:1006110106408
Gonzalez, R., & Dahanayake, A. (2007). Responsibility in user participation in information system development. In M. Khosrow-Pour (Ed.), Managing worldwide operations and communications with information technology (pp. 849–851). Hershey, PA: IGI Publications.
Håkansson, H. (1989). Corporate technological behaviour: Co-operation and networks. London: Routledge.
Hirschheim, R., & Klein, K. (1989). Four paradigms of information system development. Communications of the ACM, 32(10), 1199–1217. doi:10.1145/67933.67937
Levine, H. G., & Rossmoore, D. (1994). Politics and the function of power in a case study of IT implementation. Journal of Management Information Systems, 11(3), 115–134.
Mantzana, V., Themistocleous, M., Irani, Z., & Morabito, V. (2007). Identifying healthcare actors involved in the adoption of information systems. European Journal of Information Systems, 16(1), 91–102. doi:10.1057/palgrave.ejis.3000660
Markus, M. L., Axline, S., & Petrie, D. (2000). Learning from adopters' experiences with ERP: Problems encountered and success achieved. Journal of Information Technology, 15(4), 245–265. doi:10.1080/02683960010008944
Markus, M. L., & Mao, J. Y. (2004). Participation in development and implementation: Updating an old, tired concept for today's IS contexts. Journal of the Association for Information Systems, 5(11-12), 514–544.
McElroy, B., & Mills, C. (2003). Managing stakeholders. In R. J. Turner (Ed.), People in project management (pp. 99–118). Aldershot, UK: Gower.
McLoughlin, I. (1999). Creative technological change. London: Routledge.
Mitchell, R. K., Agle, B. R., & Wood, D. J. (1997). Toward a theory of stakeholder identification and salience: Defining the principle of who and what really counts. Academy of Management Review, 22(4), 853–886. doi:10.2307/259247
Müller, R., & Turner, J. R. (2007). Matching the project manager's leadership style to project type. International Journal of Project Management, 25(1), 21–32. doi:10.1016/j.ijproman.2006.04.003
Mumford, E. (2001). Action research: Helping organizations to change. In E. Trauth (Ed.), Qualitative research in IS: Issues and trends (pp. 46–77). Hershey, PA: Idea Group Publishing.
Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organisations. Organization Science, 3(3), 398–427. doi:10.1287/orsc.3.3.398
Pinch, T. J., & Bijker, W. E. (1997). The social construction of facts and artifacts: Or how the sociology of science and the sociology of technology might benefit each other. In W. E. Bijker, T. P. Hughes, & T. J. Pinch (Eds.), The social construction of technological systems (pp. 17–50). Cambridge, MA: MIT Press.
Pouloudi, A., & Whitley, E. A. (1997). Stakeholder identification in inter-organisational systems: Gaining insights for drug use management systems. European Journal of Information Systems, 6(1), 1–14. doi:10.1057/palgrave.ejis.3000252
Rost, J. (2004). Political reasons for failed software projects. IEEE Software, 21(6), 104–107. doi:10.1109/MS.2004.48
Saarinen, T. (1996). An expanded instrument for evaluating information system success. Information & Management, 31(2), 103–118. doi:10.1016/S0378-7206(96)01075-0
Shenhar, A. J., Levy, O., & Dvir, D. (1997). Mapping the dimensions of project success. Project Management Journal, 28(2), 5–13.
Turner, J. R. (2006). Towards a theory of project management: The nature of the project. International Journal of Project Management, 24(2), 1–3. doi:10.1016/j.ijproman.2005.11.008
Vos, J. F. J., & Achterkamp, M. C. (2006). Stakeholder identification in innovation projects: Going beyond classification. European Journal of Innovation Management, 9(2), 161–178. doi:10.1108/14601060610663550
Walsham, G. (1993). Interpreting information systems in organizations. Chichester, UK: John Wiley & Sons.
Wolfe, R. A., & Putler, D. S. (2002). How tight are the ties that bind stakeholder groups? Organization Science, 13(1), 64–80. doi:10.1287/orsc.13.1.64.544
Yuthas, K., & Dillard, J. F. (1999). Ethical development of advanced technology: A postmodern stakeholder perspective. Journal of Business Ethics, 19(1), 35–49. doi:10.1023/A:1006145805087
Zachman, J. A. (1999). A framework for information systems architecture. IBM Systems Journal, 38(2/3), 454–471.
This work was previously published in Social, Managerial, and Organizational Dimensions of Enterprise Information Systems, edited by Maria Manuela Cruz-Cunha, pp. 313-328, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 6.6
Understanding Information Technology Implementation Failure:
An Interpretive Case Study of Information Technology Adoption in a Loosely Coupled Organization

Marie-Claude Boudreau, University of Georgia, USA
Jonny Holmström, Umeå University, Sweden

DOI: 10.4018/978-1-59904-570-2.ch008
Abstract
This chapter uses the theory of loose coupling to explain failure in the adoption of an information technology aimed at improving collaboration across one organization's internal boundaries. The research details an interpretive case study of a single organization, MacGregor Crane, in which relatively autonomous individuals are only loosely connected in terms of their daily interactions. The company implemented Lotus Notes® in an attempt to increase collaboration. However, this effort failed because employees in various units, particularly engineering, were reluctant to share information across unit boundaries. In light of these findings, it is suggested that the successful
implementation of a collaborative IT within a loosely coupled organization should involve the reconsideration of the organizational members’ roles and functions.
Introduction
In this postindustrial era, firms are becoming more dependent on horizontal collaborations of diverse groups rather than on vertical chains of command (Barley, 1996; Kellogg, Orlikowski, & Yates, 2006). To facilitate such horizontal collaborations, organizations have relied on information technologies (IT) to support coordination among peers. However, in many cases, the implementation and use of collaborative technologies have led to mixed results. This can be comprehended
by recognizing that the successful implementation and use of IT in an organization is greatly influenced by an organizational culture supportive of high trust, willingness to share information, and commitment to organizational goals. To this end, typical barriers to the successful adoption of IT in organizations can be found in political friction between organizational roles (Mähring, Holmström, Keil, & Montealegre, 2004; Orlikowski, 1992). This chapter is based on a study conducted at MacGregor Crane, an organization in the business of developing and constructing shipboard cranes. MacGregor Crane includes a number of organizational members who largely work in parallel with one another. MacGregor Crane fits the general description of a "loosely coupled" system, a description that underlines how organizational members have great latitude in interpreting and implementing directions despite the presence of other organizational members. Weick (1979) stresses the autonomy of individuals and the looseness of the relations linking individuals in an organization. Whereas loosely coupled systems are characterized by both distinctiveness and responsiveness (Orton & Weick, 1990), a potential downside of such systems is poor collaboration among organizational members. The IT project initiated at MacGregor Crane was aimed at dealing with this problem. The use of IT for coordination is more complex than suggested in the academic and practitioner literature (for a discussion, see Kling, 2002). Coordination, as the management of dependent activities (Crowston, 2003; Malone & Crowston, 1994), is central to organizing, and as more and more organizations become flat and outsourced, many look to new technologies to help them with organizing. Seeking a solution to the lack of collaboration among organizational members, MacGregor Crane turned to IT. It decided to launch a project to deliver a collaborative technology, Lotus Notes®, which
was expected to increase collaboration both within and across professional boundaries. The goal of this chapter is to explain an organization’s failure to successfully implement a technology targeted at increasing collaboration between organizational members. More specifically, our core research question asks: “Why was MacGregor Crane unsuccessful in fostering collaboration supported by Lotus Notes®?” We suggest that loose coupling (Meyer & Rowan, 1976; Weick, 1979) is a particularly appropriate theory to answer this question, as MacGregor Crane fits the general description of a “loosely coupled” organization. The chapter is structured as follows: “Literature” discusses organizational change, collaborative technology, as well as loosely coupled systems. In “Case: MacGregor Crane”, details about our inquiry at MacGregor Crane are provided. More specifically, this section describes the selected site and the research approach, followed by an account of MacGregor Crane’s Lotus Notes® implementation. A discussion of the case study findings is presented in “Discussion,” followed by concluding remarks in “Conclusion.”
Literature
The relation between IT and organizational change has always been a central concern for IT practitioners and academicians. While new IT shape organizational behavior and structure, the role and meaning of IS is largely shaped by organizational circumstances. The two are inextricably intertwined: there is a reciprocal relationship between IT and organizations, each shaping the other (see, e.g., DeSanctis & Poole, 1994; Kling & Iacono, 1989; Monteiro & Hanseth, 1995; Orlikowski, 2000). In other words, contemporary organizations are entangled with technology. One cannot understand organizations without understanding technology, or understand technology without understanding organizations.
Clearly, IT have the capacity to enable change in various ways: the ways in which organizational work is executed (DeSanctis & Poole, 1994); the effectiveness and efficiency of an organization (Fiedler, Teng, & Grover, 1995); the knowledge demanded for the execution of various tasks (Ehn, 1988); the organizational and occupational structure of work (Barley, 1986; Kling & Iacono, 1984; Orlikowski, 1996); and the possibilities for collaboration (Evans & Brooks, 2005; Zuboff, 1988). Collaboration and coordination, as a type of organizational change often associated with the use of IT, is of interest here.

Collaborative Technology Implementations
Many scholars interested in organizational communication and coordination have focused on interfirm networking and the IT infrastructures supporting it (e.g., Patrakosol & Olson, 2007). While it is accepted that innovation tends to occur in highly interacting and collaborative organizations (Miles & Snow, 1986), it should also be noted that innovation depends on a well-working integration of technological resources (Kodama, 1995). The impacts resulting from the implementation of collaborative technology have been investigated quite frequently by both practitioners (e.g., Kiely, 1993; Schlack, 1991) and academics (e.g., Brown, 2000; Fidas, Komis, & Avouris, 2005; Karsten, 1999; Orlikowski, 1992; Tung & Tan, 2000; Vandenbosch & Ginzberg, 1996-97; Wong & Lee, 1998). Lotus Notes®, as a widespread collaborative technology, has attracted much coverage. Notes provides electronic messaging to improve communication; it provides shared databases to improve collaboration; and it supports calendaring and group scheduling to improve coordination. Vandenbosch and Ginzberg (1996-97), in their review of collaborative technology implementations, claimed that only a few studies have acknowledged a positive impact of these technologies on organizational collaboration. Perhaps surprisingly, these authors contend, "most studies have not found substantive effects" (p. 68). From their review, Vandenbosch
and Ginzberg (1996-97) concluded that four factors are necessary for the implementation of such technologies to enhance collaboration: (1) organizational members must have a need to collaborate; (2) organizational members must understand the technology and how it can support collaboration; (3) the organization must provide appropriate support for the adoption, implementation, and continued use of the technology; and (4) the organizational culture must support collaboration. In another thorough and more recent account of the research on the impacts of collaborative technology, Karsten (1999) reviewed 18 case studies involving the implementation of Lotus Notes®. After studying these implementations along many dimensions, Karsten concluded that the four criteria put forward by Vandenbosch and Ginzberg (1996-97) are not strong indicators of the extent to which Lotus Notes® may lead to collaboration. More particularly, Karsten stated that although these factors may be considered as "lessons that can be helpful in planning implementation projects, […] the evidence provided by the case studies did not support the conditions nor the belief in the inherent collaborative model [of Notes]." Karsten rather emphasized other issues likely to influence the relationship between collaborative IT and collaboration, among them the difference between technology as a product and "technology-in-use" (Orlikowski, 2000; Orlikowski, Yates, Okamura, & Fujimoto, 1995), and the kind of "care" needed in bringing about desired changes (Ciborra, 1996). Both Orlikowski and Ciborra emphasize the need to understand the organizational context and, to this end, loosely coupled organizations present us with a particular challenge. We need to understand the complexities involved in enacting communication and collaboration, and the forces working for and against this. To do so, we turn our attention to the theoretical perspective of loose coupling.

Loosely Coupled Systems
Organizational theorists refer to the relationship between two separate organizational entities as
coupling. Coupling refers to how events in one organizational entity affect another organizational entity. Weick (1979) has discussed coupling as being based upon the number of variables shared between two separate entities; coupling may be “tight” or “loose” depending on the importance and commonality of variables. Viewing organizations as “loosely coupled” systems underlines how individual participants have great latitude in interpreting and implementing directions. In his description of loosely coupled systems, Weick (1979) stresses the autonomy of individuals and the looseness of the relations linking individuals in an organization. The central information activity is resolving the equivocality of information about the organization’s environment. This “sense making,” as described by Weick (1995), is largely done retrospectively, since one cannot make sense of events and actions until they have occurred. Current events are compared with past experience in order to construct meaning: […] the goal of organizations, viewed as sense making systems, is to create and identify events that recur to stabilize their environments and make them more predictable. A sensible event is one that resembles something that has happened before. (Weick, 1995, p. 170) The enacted environment is seen as an output of the meaning-construction process, and serves as a guide for future action. However, once the environment has been enacted and stored, people in the organization face the critical question of what to do with what they know. While shared interpretations in an organization are a compromise between stability and flexibility, some equivocal features still remain in the stored interpretations. Equivocality is central in all organizing, and people in organizations are […] people who oppose, argue, contradict, disbelieve, doubt, act hypocritically, improvise, counter, distrust, differ, challenge, vacillate, question, puncture, disprove, and expose. All of these actions embody ambivalence as the optimal compromise to deal with the
incompatible demands of flexibility and stability. (Weick, 1979, p. 229)

Clearly, this is very different from mainstream organization theory, and Weick (1976) states that "people who are steeped in the conventional literature of organizations may view loose coupling as a sin or something to be apologized for" (Weick, 1976, p. 6). Meyer and Rowan (1977) express fears that loose coupling can often lead to decoupling, that is, the valuing of ceremonial practices over efficiency. Management in a decoupled atmosphere will use tactics of avoidance, discretion, and overlooking to ensure that individual participants maintain face, and such face-saving can end up being valued more highly than efficiency.

Loosely organized systems can suffer from a lack of shared context and thus a lack of shared interpretations among organizational members on various work-related issues. Organizations with a highly heterogeneous and specialized workforce, working in geographically dispersed teams, may suffer the most from this lack of shared context. If not dealt with in a proper manner, this can become an obstacle to organizational innovation. One possible way of dealing with it is to create a shared context virtually, by means of new IT. The role of IT in such an arrangement is that of a "boundary object," as described by Bowker and Star (1999):

Boundary objects are those objects that both inhabit several communities of practice and satisfy the informational requirements of each of them… [They] are weakly structured in common use and become strongly structured in individual-site use. (Bowker & Star, 1999, p. 297)

As boundary objects, IT applications may be used in loosely coupled organizations as tools for promoting better coordination. The concept of loose coupling is thus a potentially fruitful set of ideas to draw from when trying to make sense of collaborative technology (i.e., here a boundary
object). Accordingly, we propose to investigate the current case while taking into account the theoretical perspective of loose coupling. We expect that this perspective will be useful in understanding the organizational change related to collaboration and coordination within the organization under study.
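Before turning to the case, it may help to note that Weick's characterization of coupling strength, as varying with the number and importance of shared variables, lends itself to a toy formalization. The sketch below is purely illustrative and not part of Weick's or this chapter's formal apparatus; the unit names, variables, weights, and threshold are all invented.

```python
# Toy quantification of Weick's (1979) notion of coupling: two units are
# more tightly coupled the more variables they share and the more those
# variables matter to each of them. All names and figures are illustrative.

def coupling_strength(unit_a, unit_b):
    """Sum the importance of the variables two organizational units share."""
    shared = set(unit_a) & set(unit_b)
    return sum(min(unit_a[v], unit_b[v]) for v in shared)

# Each unit maps the variables it attends to onto an importance weight (0-1).
engineering = {"product_safety": 1.0, "functionality": 0.9, "schedule": 0.4}
sales       = {"customer_feedback": 0.9, "price": 0.8, "schedule": 0.5}

strength = coupling_strength(engineering, sales)
print(f"coupling strength: {strength:.2f}")   # only 'schedule' is shared
print("loosely coupled" if strength < 1.0 else "tightly coupled")
```

On this crude measure, two professional roles that attend to almost entirely different variables, as the engineers and salespeople in the case below do, register as loosely coupled.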
Case: MacGregor Crane

Selected Site

MacGregor Crane is one of many companies which agreed to be studied by the first author's research group. MacGregor Crane was founded at the end of the 17th century to manufacture industrial products. It now develops and manufactures shipboard cranes and delivers them around the world. MacGregor Crane decided early to focus exclusively on hydraulic cranes, which have gradually replaced electric cranes and now totally dominate the world market, and it has had good results during the last few years.

At the start of our inquiry, MacGregor Crane had 235 employees at its headquarters (of a total of 250 employees in the whole organization). The most important "professional roles" assumed by its employees included management, accounting, sales, and engineering.¹ Organizational members perceived these professional roles as very distinct from one another. Ties between these organizational roles were loose in that limited collaboration existed across professional boundaries. Within professional boundaries, however, ties were stronger; people in the same professional role generally collaborated and agreed on key issues. Thus, MacGregor Crane resembles an organization that Weick (1979) would consider a loosely coupled system. As suggested by Orton and Weick (1990), loose coupling combines the contradictory concepts of "connection" and "autonomy," and should thus be regarded as a dialectical concept (Van de Ven & Poole, 1995).
Research Approach

To preserve a dialectical interpretation, it has been suggested that greater familiarity with a few systems is more valuable than lesser familiarity with many (Orton & Weick, 1990). With this in mind, a case study supported by a qualitative approach (Eisenhardt, 1989; Miles & Huberman, 1984) was conducted at MacGregor Crane. The case study was grounded in interpretive epistemology (Klein & Myers, 1999; Walsham, 1995). According to Walsham (1993), this approach to information systems research is predominantly "aimed at an understanding of the context of the information system and the process over time of mutual influence between the system and its context" (p. 14). Hence, a basic ontological assumption is that there is no fixed relationship between information technology and organization. Rather, it is assumed that the dynamics of technology and organization unfold in an ongoing mutual shaping process, which is never determined by any single factor alone.

The question of generalizability has often been a problematic issue for qualitative researchers (Johnson, 1997). In that respect, Walsham (1995) argues that the nature of generalization in interpretive study is clearly different from what it is in the positivist tradition. He identified four types of generalization, one of which is the "development of rich insight."² Walsham also maintains that generalizability, in the context of small numbers of case studies, relies on the plausibility and cogency of the logical reasoning used in describing the results from the cases, and in drawing conclusions from them (Walsham, 1993). In the present case, generalizability will thus be established by the plausibility and cogency of the analysis upon which rich insight will be generated.

Data collection techniques included document analysis and semi-structured interviews. Documents were analyzed to provide the researchers with a better understanding of MacGregor Crane's business situation. Two types of documents were
considered. First, an overall IT-strategy document was made available to the researchers. Second, documentation regarding the Lotus Notes® implementation (project documentation, user manual, training manual, etc.) was also studied. These documents provided sufficient knowledge for the researchers to ask informed questions during the interview process.

Interviews were conducted in two separate rounds. First, 11 interviews were conducted during a 2-month period in 2000. Interviewees included managers, controllers, engineers, marketers, salespeople, and one secretary. These interviews included questions regarding the start of the project, the use of the Lotus Notes® application, and the problems that were encountered. A second round of interviews was conducted during another 2-month period in 2004, with 20 interviewees. These interviews included questions related to the reasons behind the abandonment of the Lotus Notes® application (at this point, Lotus Notes® had been replaced with an HTML-based intranet). The interviewees from this second round were the same as in the first round, plus additional engineers and sales personnel. Each interview, from either round, lasted between 30 and 60 minutes and was tape-recorded and later transcribed.

The content of all the interview transcripts was then read to identify issues and topics as they were framed by organizational members. These issues and topics were then analyzed and aggregated to arrive at a set of themes that were common or recurring. All the data were then reexamined and recategorized in terms of this new set of common themes. Such an iterative analysis of data and themes allowed us to reflect better on the experiences and interpretations of the organizational members involved in this implementation. Our analysis offers insight into the dynamics behind this software implementation.
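The iterative coding procedure just described, reading transcripts for issues, promoting recurring issues to common themes, and then recategorizing all data against those themes, can be pictured with a minimal sketch. This is a hypothetical illustration of the analysis loop, not the actual instrument used in the study; the transcript tags and the recurrence threshold are invented.

```python
from collections import Counter

# Each transcript has been read and tagged with the issues it raises
# (the first pass of the procedure). The tags here are invented examples.
transcripts = {
    "engineer_03": ["resistance", "training", "information_overload"],
    "sales_07":    ["resistance", "technical_support"],
    "manager_01":  ["training", "resistance"],
}

# Second pass: aggregate issues across transcripts; issues that recur in
# at least `min_recurrence` transcripts are promoted to common themes.
min_recurrence = 2
counts = Counter(issue for issues in transcripts.values() for issue in issues)
themes = {issue for issue, n in counts.items() if n >= min_recurrence}

# Third pass: reexamine all data, recategorizing each transcript in terms
# of the common themes only.
recoded = {tid: [i for i in issues if i in themes]
           for tid, issues in transcripts.items()}

print(sorted(themes))   # -> ['resistance', 'training']
print(recoded)
```

In the study itself this loop was of course performed interpretively by the researchers rather than mechanically, but the promote-and-recode structure is the same.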
The Implementation

Like many companies in its industry, MacGregor Crane used cross-functional teams for product development and, to some degree, also for sales and marketing. These teams were composed of members from multiple functional units who joined and left the team based on their level of interest and required input. This had not been considered a problematic issue in the past, but as each functional unit became more and more specialized, the need for better coordination grew urgent. Because organizational members from different departments were involved to greater and lesser degrees as an idea moved through various stages of development, information needs varied for each unit and individual involved in the process. To keep all projects moving, people joining a project team were required to gain knowledge of the current project quickly and efficiently. The Lotus Notes® implementation project was launched to deal with these communication and coordination issues, and to provide the organization with a shared context. While the implementation project covered a long period of time, it can be subdivided into two distinct phases.
Project's Initial Phase

During the early phase of the project, from spring 1997 to August 1998, it was anticipated that the existing work procedures and practices could be improved in three specific ways by the use of Lotus Notes®. First, the project manager perceived a need for better dissemination of general organizational information, such as news concerning new employees, new policies, or new deals for the organization. Overall, this objective was successfully met. Second, the project manager had identified a need for better collaboration among engineers. Because engineers were skilled in many different areas, the project manager's ambition was for engineers to learn from each other:
There are so many areas we can improve here, and I felt that initially, a good start would be to focus on improving the way in which we handle information at this place. This is particularly important when it comes to information concerning the development work; after all, it is the core of things here.

Although most engineers claimed that Lotus Notes® would be useful to their work, they could not clearly explain how this would be the case. Engineers were used to handling unstructured information. When asked to describe parts of their work routines and the reasons behind decisions in the development process, engineers had much difficulty in doing so. Rather than identifying the most important factors they considered in their work, they would instead provide a series of examples of individual circumstances, with no easily identifiable underlying procedures. Still, although engineers had difficulty articulating how Lotus Notes® had increased the extent to which they collaborated with one another, they asserted that this goal had been successfully met.

A third way that Lotus Notes® was expected to improve work procedures was by increasing support to the salespeople. For some time, the project manager had discussed the need for more efficient technical support for the salespeople. He had perceived that sales personnel needed access to more up-to-date and detailed technical information in their meetings with potential clients. With better information, the project manager speculated, salespeople would be better informed about the needs of each of their potential clients, which would in turn significantly increase their chances of selling products to the client base. As one salesperson said:

We [the salespeople] have seen how we can improve our work and our sales if we only had more support from the engineers. I'm not sure how that would work, we will need to sort that one out […] but we need to deal with this to stay competitive.
This goal, however, was not met. The engineers did not welcome the idea of sharing technical knowledge about the company's products, at least not in a formal way imposed by the technology. Although none of the engineers raised critical comments against the idea as it was presented, they did not contribute any substantial information for this purpose through Lotus Notes® and were cautious about such an initiative. One of them commented:

I'm not saying that we're against this idea; I'm just saying that we need to be careful before we embark on a path when we really can't say where we are going to end up. I'm all for new technology – I'm an engineer! But I mean […] we need to consider the consequences, and as far as I can tell, nobody has really done that yet. At the end of the day, we do need to produce something. Clearly, we need to discuss things and plan ahead and the like […] but we also need to produce, we cannot just talk about it.

Overall, there was no additional collaboration between engineers and salespeople. This was disappointing, as the dissemination of general organizational information and the improved collaboration among engineers through the use of Lotus Notes® had been quite successful. Emphasizing the first two goals that had been successfully met, the project manager expressed positive feelings about the initial period of the project. The failure to increase collaboration between the engineers and other organizational members was rationalized as a potential lack of resources for the project. Generally satisfied with the "Lotus Notes® experience," the project manager felt compelled to pursue the project and to step up its ambitions:

I felt we had come a long way with a limited budget. Now, I would say it would be reasonable
to assume that we would come even further with more resources available for the project.

The management agreed to set up a proper budget for the continuation of the implementation project, and a consultant was hired to work full time on it. During the next phase, the goal was to push the "Lotus Notes® experience" further and to focus on collaboration among organizational units.
Project's Later Phase

During the project's later phase, which started in August 1998 and ended in November 1999, the idea of increasing collaboration between engineers and other organizational members was revived, although somewhat reformulated. Again, expectations were expressed through three particular goals. First, there was a desire to have the sales personnel report customers' reactions and comments to the engineers. In this case, the assumed flow of collaboration was from the sales personnel to the engineers, that is, the reverse of what it was in the initial phase of the project. The planned effect, therefore, was to have salespeople supporting the engineers with information about how their products were received on the market. Nevertheless, the engineers again resisted taking part in such collaboration. Their resistance was based on their fundamental belief that external opinions could only have a marginal influence on the development of cranes; what mattered were issues concerning functionality and safety. As noted by one of the engineers:

We are responsible for our products and […] you have to consider that we are dealing with high tech equipment to be used in milieu where people rely on the safety and the functionality of our equipment. Our customers cannot begin to understand all the issues involved in the development of cranes, and the same can be said about our salespeople. If we were to ask the market or the customers about how
to develop our products, I wouldn't want to be on a construction site where that crane was used! Functionality and safety go together, you can't separate them in the development process. We need to put these issues in focus and if we don't, well, then we're not doing our jobs.

Thus, the engineers did not welcome the idea behind the proposed collaboration. The position the engineers took on this issue came as a surprise to the other organizational members involved in the project.

A second way Lotus Notes® was expected to foster collaboration was through the sharing of key ratios between accounting personnel and sales personnel. Among the accounting group at MacGregor Crane, there were very explicit ideas about which key ratios were important and how they should be interpreted and acted upon. Key ratios were measures of success, or lack of success, in various organizational areas. For example, an important key ratio measured the sales success for specific products. Although the implementation of Lotus Notes® should have given MacGregor Crane more detailed and up-to-date information about its sales through these key ratios, the expected collaboration between accountants and salespeople was not realized. A lack of shared norms, along with resistance from the sales personnel to sharing information, contributed to this setback.

Finally, a third way of cultivating collaboration was to increase partnership between accountants and managers. The project manager believed that it would be beneficial if all managers could get access to more timely information. Most managers shared his opinion and thus welcomed the idea of being able to act more quickly, informed by timely information. However, despite the fact that such timely information eventually became available on Lotus Notes®, managers did not take advantage of it. The project manager believed that this had to do with the managers' minimal experience with IT. Even though the technology was available for all managers, they did not use it in any substantial way.
Table 1. Anticipated and unanticipated organizational outcomes

| Period of Project | Domain of Organizational Change | Expectations | Outcomes |
|---|---|---|---|
| Project's Initial Phase | Organizational | Better dissemination of information | Better dissemination of information |
| Project's Initial Phase | Engineering | Increased collaboration within professional boundaries | Increased collaboration within professional boundaries |
| Project's Initial Phase | Sales and Engineering | Increased collaboration across professional boundaries through the conveying of technical support | Insignificant increase in collaboration across professional boundaries because of lack of buy-in from the engineers |
| Project's Later Phase | Sales and Engineering | Increased collaboration across professional boundaries through the conveying of customers' feedback | Insignificant increase in collaboration across professional boundaries because of lack of buy-in from the engineers |
| Project's Later Phase | Accounting and Sales | Increased collaboration across professional boundaries through the standardization of key ratios | Insignificant increase in collaboration across professional boundaries because of lack of shared norms and resistance from sales personnel |
| Project's Later Phase | Accounting and Managers | Increased collaboration across professional boundaries allowing more timely organizational action | Insignificant increase in collaboration across professional boundaries because of lack of buy-in from the managers |
Overall, the later phase of the project encompassed efforts that did not result in any actual changes. The project was based on the idea that increased collaboration among key organizational roles would contribute to the organization's capability to reach its goals. This collaboration was resisted, though, by the engineers, the sales personnel, and the managers, as they all perceived that their work practices would be changed in a way they did not feel comfortable with. The project as a whole is summarized in Table 1: for each phase of the project, three domains of organizational change are highlighted with their associated expectations and outcomes.
Epilogue: Project Termination and Technology Rejection

In November 1999, MacGregor Crane realized that they were not going to establish any deep collaboration between organizational units mediated by the Lotus Notes® application. In order to enable a certain degree of information flow between organizational members, an HTML-based intranet was launched. It was developed in a hierarchical structure reflecting the organizational structure at MacGregor Crane. The design has remained more or less the same since March 2001, when the organizational units got their own links in the intranet. One person from each department was selected to be in charge of keeping the information related to the department up to date, and all suggestions for changes had to go through this person. While email addresses are included in the intranet, there are no other means of interaction available. Organizational members rarely gave any suggestions on the content and form of the intranet. As some of the respondents commented:

Since I came here, I have not really seen any changes being made really. I haven't been asked
about it either. Well, they do send out e-mails where they ask us all to come up with suggestions. We have all received them, but I haven’t replied.
I guess we are all so used to the way in which it is designed. I guess we are not really coming up with suggestions for how to change it. I sure am not. I am just used to seeing it in its current design.

By 2004, most organizational members seemed to feel that the current design was working well. The notion of the intranet as a digital version of "what is already there" seemed to be the dominant view among organizational members. Moreover, not many people formulated any alternatives.

Security was another reason why the HTML-based intranet was not leveraged. Because of the diversity of people working at MacGregor, many organizational units were concerned with keeping critical information out of the intranet. Some project members were not formal employees at MacGregor Crane, which led to constant reflection over how much access to critical data was allowed:

Now that we are working with a number of outsourced businesses, we cannot give as much access to people [working in those businesses] as we give to people who work here. And you can't justify this by telling it like it is. I can't go telling some contractor that they can't get the information they are asking for since they will not be working here in 3 months, that we will be taking in someone else. I can't tell them things like that. So you constantly need to pay attention to information flows in relation to non-MacGregor people.

In general, the value of the intranet was perceived as low. It constituted a bleak compromise to the Lotus Notes® alternative, one that did not offend any parties. Although, on the surface, MacGregor was using this technology with the potential to improve communication and coordination between its organizational units, it in fact failed to do so.

Discussion
Overall, increased collaboration through Lotus Notes® was not realized, as the implementation project did not result in a system that was used in the way expected. It was clear that a lot of resources were put into the project,³ yet everyone involved described the final results as poor. Ironically, while there was wide support for the idea that Lotus Notes® was going to foster greater organizational collaboration and coordination, the many attempts to increase collaboration were met with resistance every time they involved more than one organizational unit. Closer examination of our data revealed that such resistance was not that explicit to begin with. In fact, the engineers even expressed that more collaboration was welcome. Reflecting on this situation, the project manager commented:

Clearly, we were a bit naive about all this. I mean, who is willing to stand up and say: 'I don't think collaboration is such a good idea!' Turned out that none of the engineers did anyway. But that was their message, in effect: 'We don't like this idea of collaboration at all!' Now, I don't want to point fingers at anybody […] but if they would have been more open about their opinions we could have saved a lot of money.

In fact, it appears that many organizational members had a cautious reluctance to increase collaboration through information technology. These organizational members, however, did not communicate their apprehension to the project manager. As the project manager noted, questioning the underlying idea of increased collaboration was not "politically correct"; it was something the engineers felt uncomfortable vocalizing. There is something
"honorific" behind a statement like "increased collaboration," and to argue against it can be interpreted as an irrational act.

Why did the use of Lotus Notes® not lead to further collaboration across professional boundaries? The four conditions suggested by Vandenbosch and Ginzberg (1996-97) as fostering greater collaboration were all met to some degree. First, there was a strong need for collaboration, as it was in the nature of MacGregor Crane's business to use cross-functional teams for product development, sales, and marketing. Second, there was a mixed understanding of the technology in the organization: among engineers and salespeople there was a good understanding of the technology and how it could support collaboration, as training had been provided and faithfully attended by these groups, but the management did not use the technology to any great extent, which hindered wider collaboration. Third, there was firm support, at least at the explicit level, behind the implementation; the project manager was given the additional resources necessary to pursue the project and to step up its ambitions. Finally, the collaborative culture at MacGregor Crane was relatively strong prior to the Lotus Notes® implementation, as organizational members were used to being part of teams involving multiple functional units that were dismantled and recreated over time. However, this collaborative culture had developed within professional teams, and there was not much collaboration between these teams either prior to or after the completion of the Lotus Notes® project.

Overall, none of the four conditions proposed by Vandenbosch and Ginzberg (1996-97) can be invoked to explain the lack of collaboration. This uncovers the often paradoxical character of organizational life, as it is not uncommon for IT to produce unpredictable organizational consequences. The unpredictable and ubiquitous nature of IT's organizational consequences forces us to introduce new ways of thinking about how to study, explain, and anticipate these consequences (for a discussion, see Robey
& Boudreau, 1999). For this purpose, we turn our attention to the theoretical lens of loose coupling.
Organizational Collaboration in the Context of Loose Coupling

Loose coupling recognizes the need of individuals within an organizational culture to adjust their understanding of the organization before they can adjust to their changed role as individuals within it. This adjustment was not made by the different organizational groups at MacGregor Crane. The barriers to adjusting their understanding of the organization, and of their own role within it, were especially evident among the engineering team. The organizational structure was, to a large extent, the product of engineering-centered processes; as many organizational members routinely mentioned, MacGregor Crane had been an engineering firm since its inception. Resolving the equivocality of information about the role and meaning of Lotus Notes® (Weick, 1979) was sought within the limits of this existing organizational structure. While the project manager's ambition was to change the organizational structure, that structure was the starting point against which organizational members would interpret and make sense of the newly implemented technology. Through sense making (Weick, 1995), users compared the situations before and after the implementation of Lotus Notes® within the organization. It is clear that these users had great latitude in interpreting and implementing directions, albeit within the boundaries of the existing organizational structure.

Not all organizational structures necessitate tight coupling, and some managerial initiatives, like decentralization, delegation, and professionalization, build some looseness and flexibility into such structures. For some organizations this may be a necessary structure, especially when managers do not have the basic understanding to closely supervise specialized employees; in such a case, they will typically encourage horizontal
collaborations rather than vertical chains of command (Barley, 1996; Kellogg, Orlikowski, & Yates, 2006).

Looking at this project through the lens of loose coupling, we can appreciate the tension in horizontal collaboration involving loosely coupled organizational roles. While one of the presumed strengths of a loosely coupled system is that it can adapt to its environment by relying on the collective intelligence of its constituent parts, this study illustrates how loosely coupled constituent parts may also resist change. Loose coupling exists for a good reason, and any effort to intervene, to move towards tighter coupling or further decoupling (Meyer & Rowan, 1977), has to present the organization with rational arguments. While loose coupling often arises because of a high degree of ambiguity in decision tasks (March, 1994), efforts to disrupt such a situation by implementing a new IT application may be interpreted as inappropriate. This was the case at MacGregor Crane, most notably in the way the engineers resisted the project.

This tendency toward resistance among the engineers was magnified by the way the project was managed: top-down, guided by an overall ambition that was not reconsidered in the light of organizational resistance. A more suitable approach to the lack of shared context typical of loosely coupled organizations requires increased acknowledgment of the needs and preferences expressed by all organizational roles. The constituent parts of a loosely coupled organization could serve as the tentacles in the process of IT adaptation. This was not done at MacGregor Crane, as the implementation of Lotus Notes® was conducted such that managerial intentions were not changed in the process. Even in the light of a seemingly obvious failing course of action, the original ideas were not questioned. A similar view is presented by Mintzberg (1994), who argues that strategies that emerge from the "managerial mind" may not be as efficient as those emerging from the organization's grass roots
(p. 241). He describes top-down strategizing as intrusive and upsetting, as episodic exercises that are more likely to introduce discontinuities and errors than to serve the organization well. His basic argument is that organizations need to promote learning at the "lowest levels" to adapt to changes in the environment. The lowest levels, in the current case, are the loosely coupled elements, for example, the various teams. The managers of the Lotus Notes® implementation were not sensitive to these teams. A reason for the lack of collaboration resulting from this implementation can be found in the way MacGregor Crane identified loose coupling as a problem to deal with, rather than as a resource to draw from. From the point of view of "grass root" adaptation, loose coupling could be interpreted as a resource to exploit. Contrasting Weick's ideas of loose coupling with top-down, highly control-oriented managerial narratives on how to align strategy and infrastructure in modern corporations, Ciborra (2000) describes the ideas inherent in loose coupling as an organizational ideal. He argues that IT should not be imposed on loosely coupled elements, but rather that the diversity inherent to organizations should be seen as a resource to draw from.

Related to the notion of the knowledge intensity of the product, there was an obvious belief among MacGregor Crane's organizational members about which activities among the loosely coupled elements of the company were core and which were peripheral. Knowledge intensity, as defined by Ciborra (2000), depends upon (1) the number of actors that are sources or recipients of product-related knowledge, and (2) the amount and complexity of the knowledge generated or required at each stage of the development, launch, and marketing of a product. Among the multiple stages of development of MacGregor Crane's products, the engineering stage was considered the most important, that is, the core activity. Moreover, this activity solely involved the engineers, and no other group.
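Ciborra's two determinants suggest a simple, if crude, way to compare the knowledge intensity of product stages. The scoring below is our own hypothetical operationalization, not Ciborra's; the stage names and figures are invented to echo the case, in which engineering emerged as the core activity.

```python
# Hypothetical knowledge-intensity score per product stage, following
# Ciborra's (2000) two determinants: (1) number of actors that are sources
# or recipients of product-related knowledge, and (2) the amount/complexity
# of the knowledge at that stage (here a 1-5 rating). Figures are invented.
stages = {
    # stage: (actors, complexity)
    "engineering": (25, 5),
    "launch":      (10, 3),
    "marketing":   (12, 2),
}

intensity = {s: actors * complexity for s, (actors, complexity) in stages.items()}
core = max(intensity, key=intensity.get)

for stage, score in sorted(intensity.items(), key=lambda kv: -kv[1]):
    print(f"{stage:12s} {score:4d}")
print(f"core activity by this measure: {core}")   # -> engineering
```

By such a measure, engineering dominates, which is consistent with how organizational members ranked the stages.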
Thus, perhaps paradoxically, the ambition to control the IT project at MacGregor Crane, along with the resistance to reconsidering which organizational activities were central and which peripheral, contributed to the lack of collaboration resulting from the IT implementation.

Acknowledging the powerful dialectic between the needs of the organization and the needs of individuals (or groups of individuals), the idea of loose coupling underlines the dynamic relation between the loosely coupled elements on the one hand, and the organization as a whole on the other. As opposed to the linear, push/pull structuralism of the top-down hierarchical organization, loose coupling facilitates the dynamic grouping of staff and physical resources for specific purposes, followed by re-coupling as needs and purposes change. In consequence, it recognizes the need of individuals and groups within an organizational culture to adjust their understanding of the organization before they can adjust to their changed role within it. In light of this research's findings, we suggest that the successful implementation of a collaborative IT within a loosely coupled organization should involve the reconsideration of organizational members' roles and functions. This did not happen at MacGregor Crane.
Conclusion

The problems in managing complex technology projects are not new; a number of studies have pointed to the difficulties of integrating coordination technologies into work practices, raising issues such as a lack of critical mass, inadequate training, inappropriate expectations, and structural and cultural problems (Alavi, Kayworth, & Leidner, 2005-2006; Markus, 1987; Orlikowski, 1992).

The later phase of the project was concerned with efforts that did not result in any actual changes, as the project as a whole was based on
the idea that increased collaboration among key organizational roles would contribute to the organization's capability to reach its goals. This collaboration was refused by the parties involved, as they all perceived that their work practices would be changed in a way they did not feel comfortable with. In theory, collaboration was welcomed by all organizational members, but in practice, people refused to change their work practices. In this project, thus, there were no well-controlled changes made when it came to the organizational adaptation of the Lotus Notes® application. MacGregor Crane faced a situation where resources were put into an IT project that was not well controlled.

The "Lotus Notes® experience" at MacGregor Crane underscores how IT and organization are connected to each other, and how their role and meaning depend on this connection. This connection is of crucial importance: change one element and you also change the other. Out of this change, new meaning arises, which redirects the organization. This needs to be recognized and put on the managerial agenda; at MacGregor Crane, it was not. Ironically, while MacGregor Crane took as the starting point for the Lotus Notes® project a central problem for all loosely coupled organizations, the problem of coordination, it was this very problem that led to the overall project failure.

The loosening of the ties between organizational constituents presents us with a particular managerial challenge, and the theoretical perspective of loose coupling can be a powerful tool in the hands of IS researchers trying to gain a better understanding of the complexities involved in such situations. Nevertheless, it is not the only approach. Structuration theory, or other theoretical approaches embedding a dialectical interpretation (Robey & Boudreau, 1999), could also have shed light on the organizational change triggered by collaborative information technology (Evans & Brooks, 2005). However, we believe that, given
MacGregor Crane's organizational structure, its investigation through the lens of loose coupling constituted a particularly good fit. While more empirical work is necessary to fully understand IT adaptation, we believe that this chapter offers a useful beginning. Understanding the couplings between organizational elements, and the potential for collaboration between them, allows us to learn more about the limits of IT-related organizational communication and collaboration. The current postindustrial era, fostering market globalization, rapid technological change, and shortened product life cycles, influences the way work is done and how people collaborate (Kellogg, Orlikowski, & Yates, 2006). Given that these changes are on the rise, the extent of collaboration between organizational elements and the use of IT fostering communication and collaboration will continue to increase over time. It is thus imperative for us to understand this phenomenon better.
Acknowledgment

We are grateful for the help we had from Dan Johansson and Jenny Lundström during the second round of interviews. The second author wants to thank the Bank of Sweden Tercentenary Foundation for their support.
References

Alavi, M., Kayworth, T. R., & Leidner, D. E. (2005-2006). An empirical examination of the influence of organizational culture on knowledge management practices. Journal of Management Information Systems, 22(3), 191–224.

Barley, S. R. (1986). Technology as an occasion for structuring: Evidence from observation of CT scanners and the social order of radiology departments. Administrative Science Quarterly, 31(1), 78–108. doi:10.2307/2392767
Barley, S. R. (1996). Technicians in the workplace: Ethnographic evidence for bringing work into organization studies. Administrative Science Quarterly, 41(3), 404–441. doi:10.2307/2393937

Bowker, G. C., & Star, S. L. (1999). Sorting things out: Classification and its consequences. Cambridge, MA: MIT Press.

Brown, B. (2000). The artful use of groupware: An ethnographic study of how Lotus Notes is used in practice. Behaviour & Information Technology, 19(4), 263–273. doi:10.1080/01449290050086372

Ciborra, C. U. (1996). Introduction: What does groupware mean for the organizations hosting it? In C. U. Ciborra (Ed.), Groupware & teamwork. Chichester: John Wiley & Sons.

Ciborra, C. U. (2000). From alignment to loose coupling: From MedNet to www.roche.com. In C. U. Ciborra (Ed.), From control to drift: The dynamics of corporate information infrastructures (pp. 193-211). Oxford: Oxford University Press.

Crowston, K. (2003). The evolution of high-reliability coordination mechanisms for collision avoidance. The Journal of Information Technology Theory and Application, 5(2), 1–29.

DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5, 121–147. doi:10.1287/orsc.5.2.121

Ehn, P. (1988). Work-oriented design of computer artifacts. Stockholm: Arbetslivscentrum and Almqvist & Wiksell International.

Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–551. doi:10.2307/258557

Evans, J., & Brooks, L. (2005). Understanding collaboration using new technologies: A structurational perspective. The Information Society, 21(3), 215–220. doi:10.1080/01972240490951971
Fidas, C., Komis, V., & Avouris, N. (2005). Heterogeneity of learning material in synchronous computer-supported collaborative modelling. Computers & Education, 44(2), 135–154. doi:10.1016/j.compedu.2004.02.001

Fiedler, K. D., Teng, J., & Grover, V. (1995). An empirical study of information technology enabled business process redesign and corporate competitive strategy. European Journal of Information Systems, 4(1), 17–30. doi:10.1057/ejis.1995.3

Johnson, J. L. (1997). Generalizability in qualitative research: Excavating the discourse. In J. M. Morse (Ed.), Completing a qualitative project (pp. 191-208). Thousand Oaks, CA: Sage Publications.

Karsten, H. (1999). Collaboration and collaborative information technologies: A review of the evidence. The Data Base for Advances in Information Systems, 30(2), 44–65.

Kellogg, K. C., Orlikowski, W. J., & Yates, J. (2006). Life in the trading zone: Structuring coordination across boundaries in postbureaucratic organizations. Organization Science, 17(1), 22–44. doi:10.1287/orsc.1050.0157

Kiely, T. (1993). Learning to share. CIO, 6(15), 38–44.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23(1), 67–93. doi:10.2307/249410

Kling, R. (2002). Transforming coordination: The promise and problems of information technology in coordination. In T. Malone, G. Olson, & J. B. Smith (Eds.), Coordination theory and collaboration technology. Mahwah, NJ: Lawrence Erlbaum.

Kling, R., & Iacono, S. (1984). Computing as an occasion for social control. The Journal of Social Issues, 40(3), 77–96.
Kling, R., & Iacono, S. (1989). The institutional character of computerized information systems. Office: Technology and People, 5(1), 7–28. doi:10.1108/EUM0000000003526

Kodama, F. (1995). Emerging patterns of innovation: Sources of Japan's technological edge. Boston: Harvard Business School Press.

Mähring, M., Holmström, J., Keil, M., & Montealegre, R. (2004). Trojan actor-networks and swift translation: Bringing actor-network theory to project escalation studies. Information Technology & People, 17(2), 210–238. doi:10.1108/09593840410542510

Malone, T., & Crowston, K. (1994). The interdisciplinary study of coordination. ACM Computing Surveys, 26(1), 87–119. doi:10.1145/174666.174668

March, J. G. (1994). A primer on decision making: How decisions happen. New York: The Free Press.

Markus, M. L. (1987). Toward a "critical mass" theory of interactive media: Universal access, interdependence, and diffusion. Communication Research, 14, 520–552. doi:10.1177/009365087014005003

Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83(2), 340–363.

Miles, M. B., & Huberman, A. M. (1984). Qualitative data analysis: A sourcebook of new methods. Newbury Park, CA: Sage Publications.

Miles, R. E., & Snow, C. C. (1986). Organizations: New concepts for new forms. California Management Review, 28(3), 62–73.

Mintzberg, H. (1994). The rise and fall of strategic planning. New York: Free Press.
Monteiro, E., & Hanseth, O. (1995). Social shaping of information infrastructure: On being specific about the technology. In W. J. Orlikowski, G. Walsham, M. R. Jones, & J. I. DeGross (Eds.), Information technology and changes in organisational work (pp. 325-343). London: Chapman and Hall.

Orlikowski, W. J. (1992). Learning from Notes: Organizational issues in groupware implementation. In Proceedings of the CSCW'92 Conference on Computer-Supported Cooperative Work (pp. 362-369). New York: ACM Press.

Orlikowski, W. J. (1996). Improvising organizational transformation over time: A situated change perspective. Information Systems Research, 7(1), 63–92. doi:10.1287/isre.7.1.63

Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404–428. doi:10.1287/orsc.11.4.404.14600

Orlikowski, W. J., Yates, J., Okamura, K., & Fujimoto, M. (1995). Shaping electronic communication: The metastructuring of technology in the context of use. Organization Science, 6, 423–444. doi:10.1287/orsc.6.4.423

Orton, J. D., & Weick, K. E. (1990). Loosely coupled systems: A reconceptualization. Academy of Management Review, 15(2), 203–223. doi:10.2307/258154

Patrakosol, B., & Olson, D. L. (2007). How interfirm collaboration benefits IT innovation. Information & Management, 44(1), 53–62. doi:10.1016/j.im.2006.10.003

Robey, D., & Boudreau, M.-C. (1999). Accounting for the contradictory organizational consequences of information technology: Theoretical directions and methodological implications. Information Systems Research, 10(2), 167–185. doi:10.1287/isre.10.2.167
Schlack, M. (1991). IS puts Notes to the test. Datamation, 37(15), 24–26.

Tung, L. L., & Tan, J. H. (2000). Adoption, implementation and use of Lotus Notes in Singapore. International Journal of Information Management, 20(5), 369–383. doi:10.1016/S0268-4012(00)00029-3

Van de Ven, A. H., & Poole, M. S. (1995). Explaining development and change in organizations. Academy of Management Review, 20(3), 510–540. doi:10.2307/258786

Vandenbosch, B., & Ginzberg, M. J. (1996-1997). Lotus Notes® and collaboration: Plus ça change… Journal of Management Information Systems, 13(3), 65–81.

Walsham, G. (1993). Interpreting information systems in organizations. Chichester: John Wiley & Sons.

Walsham, G. (1995). Interpretive case studies in IS research: Nature and method. European Journal of Information Systems, 4, 74–81. doi:10.1057/ejis.1995.9

Weick, K. E. (1976). Educational organizations as loosely coupled systems. Administrative Science Quarterly, 21, 1–21. doi:10.2307/2391875

Weick, K. E. (1979). The social psychology of organizing (2nd ed.). New York: Random House.

Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage Publications.

Wong, B. K., & Lee, J.-S. (1998). Lotus Notes: An exploratory study of its organizational implications. International Journal of Management, 15(4), 469–475.

Zuboff, S. (1988). In the age of the smart machine: The future of work and power. New York: Basic Books.
Endnotes

1. It should be noted that the engineering role is formally described as being organized in "design and development," "production," and "material administration." We chose to include these administrative units in the same role as they are all concerned with engineering tasks and are not distinct roles in practice.

2. The other three types of generalization discussed by Walsham (1995) are: generation of a theory, development of concepts, and development of implications in particular domains of action.

3. The project manager did not want to state precisely how many resources were put into the project; he only stated that it was "way too much."
This work was previously published in Innovative Technologies for Information Resources Management, edited by Mehdi Khosrow-Pour, pp. 128-144, copyright 2008 by Information Science Reference (an imprint of IGI Global).
Chapter 6.7
Improving Supply Chain Performance through the Implementation of Process-Related Knowledge Transfer Mechanisms

Stephen McLaughlin
University of Glasgow, UK
Abstract

With the complexity of organizations increasing, it is becoming vitally important that organizations understand how knowledge is created and shared around their core business processes. However, many organizations deploy technology without due consideration for how their employees access, create, and share information and knowledge. This article explores the subject empirically through a study of how employees work with information and knowledge around a core business function, in this case a supply chain process. In order to do this, the organization needs to be viewed from a network perspective as it relates to specific business processes. Viewing the organization in this way enabled the author to see how employees' preferred knowledge and information transfer mechanisms varied across the core process. In some cases, the identified transfer mechanisms
were at odds with the prescribed organization-wide mechanisms. However, when the organization considered the employees' preferred transfer mechanisms as part of an overall process improvement, the E2E supply chain performance was seen to improve significantly.
Introduction

Organizations are waking up to the fact that the supply chain is not simply a support function for the business, but is in fact the key capability against which a competitive advantage can be developed (Kulp, Ofek, & Whitaker, 2003). An organization's supply chain capability is now regarded as a key contributor to any organization striving to maximise competitive advantage (Toyer, 1995), and no longer is the "supply chain" simply the preserve of procurement, logistics, or manufacturing specialists (Porter & Millar, 1985).
As organizations start to compete within global marketplaces, the complexity of their supply chains increases significantly. In order to address and manage this increased complexity, many organizations look to enterprise "supply chain" software solutions to ensure a smooth, scalable supply chain operation. This was the case with IBM's Integrated Supply Chain (ISC) operation in its Europe, Middle East, and Africa (EMEA) region. Recent strategy initiatives had seen manufacturing and distribution for PC products handed over to third-party providers. As part of the partnership agreement, the manufacturing and logistics partners shared or had access to IBM data feeds, enabling a continuous data flow from the IBM-handled fulfilment front end through to the third-party distribution engine. The data flowed; however, end-to-end performance began to deteriorate significantly. Whilst developing a recovery plan, the organization identified that the performance issues were down to a failure to understand how employees situated in different parts of the supply chain accessed, created, and shared information and knowledge (McLaughlin, Paton, & Macbeth, 2006). This article will show how knowledge and information had to be accessed, created, and shared, and how the recovery plan, by focusing on the identified preferred knowledge and information needs at different points across the supply chain, was able to drive significant end-to-end core process improvements.

Before proceeding, it is important that the difference between "information" and "knowledge" as terms of reference is clearly defined in the context of this research. Although many authors and academics use the terms information and knowledge as though they are interchangeable (Fuller, 2001; Tsoukas, 2005), there is a subtle but significant difference between the two. This has, in effect, reduced the significance of knowledge, often reducing it to mere information, and thus the qualities of knowledge as a classic philosophical concept are lost. In order to try to
distinguish between information and knowledge, Fuller (2001) looks at the original meaning of information. "Information" was derived, during the Middle Ages, from a Latin word used to describe the process by which documents were transferred, or communicated, from one entity to another. As for "knowledge," this was the mind's representation of this process, which in turn was usually understood in relatively passive terms. Knowledge, in effect, was the result of the mind's receptiveness to what lies outside it. So, in the context of this problem facing IBM, and for clarity in this article, the author will define information and knowledge as follows:
• Information: This is taken to mean codified data: data that is captured and shared via hard-copy or electronic documentation, which in turn may be stored in databases or spreadsheets, or in report form.
• Knowledge: Simon (1945) found that humans act as information processing systems that extract "meaning structures" from information inputs through sensory organs, and store these meaning structures as new knowledge. Simon's (1945) view that information only becomes knowledge within the context of the human mind is supported by Davenport and Prusak (1998), Fuller (2001), Von Hayek (1952), and Polanyi (1962). Accepting that knowledge creation and use is dependent on human interaction within an organization or process, Davenport and Prusak (1998) provide the most commonly accepted definition of knowledge within organizational and business research:
Knowledge is a fluid mix of framed experience, values, contextual information, and expert insight that provides a framework for evaluating and incorporating new experiences and information. (Davenport & Prusak, 1998)
Research Context and Methodology

The core IBM supply chain processes are supported by integrated information systems such as SAP, i2, and IBM's own DB2 database system. Certainly, information in the form of performance data was available at all points across the supply chain. However, core performance, in IBM's case, was under target; the key performance metric was the time taken to process, build, and deliver a customer's order. IBM was quick to apply resource and executive focus to addressing the problem. However, as outlined in McLaughlin et al. (2006), if sustainable process improvement was to be achieved, a different approach would be needed to identify where best to implement performance-improving change.

The first issue the organization faced in improving performance was to identify and separate the real problems from the apparent problematic symptoms. In order to do this, an end-to-end process description would need to be developed for the supply chain. This would be a significant undertaking, and not practical considering the time pressure and resource constraints. Therefore, it was decided that a process description would be defined for the core business process responsible for customer order delivery times: the Order Flow Process (OFP).

As information and data were available across the process, the author looked at how both information and knowledge were being used along the core order flow process (OFP). In order to do this, along with defining the OFP, the author would also have to identify the key employee groups that operated along the OFP, and then determine their information and knowledge habits. Then, by looking at how the employee groups are constrained in their information and knowledge habits by organization, technology, and people (Barson et al., 2000; McLaughlin & Paton, 2008; Skyrme & Amidon, 1997), and comparing this to how the employee groups would like to work, a list of
information- and knowledge-related performance improvements could be identified.

The research methodology follows an action research approach in identifying best knowledge transfer practice across a complex supply chain organization. The research is exploratory in nature, and a case study methodology (Yin, 2002) is used to support this line of inductive theory building. The findings presented in this article are based on data collated within and across IBM's Integrated Supply Chain. For the purpose of the research, the author surveyed over 150 individuals working across an IBM core end-to-end business process; in this case the supply chain order flow process (OFP) was used. The author identified all the employees to be surveyed by mapping the organization's departments and workgroups to the OFP. Once the different departments and workgroups were correctly mapped to the different parts of the OFP, the respective employees were identified through the internal online directory system. The author used a semi-structured questionnaire and one-to-one interviews to identify the organization's knowledge habits with respect to the order flow business process. The online questionnaire was used to elicit responses from those employees directly involved with running the OFP, whilst the one-to-one interviews were conducted with members of the senior management team. The content and structure of both the online questionnaire and the one-to-one interviews were identical; it was simply felt that senior management would be more likely to respond to an interview than take the time to answer an online questionnaire. The structure and contents of the questionnaire/interview are contained in an appendix at the end of this article.

The analysis of the data has been used to understand the different explicit and tacit knowledge sharing habits of the workforce, and the perceived barriers that influence these habits along a core business process. The analysis also identified where along the core process the existing knowledge management approach (codified or personalised) was at odds with employee
tacit and explicit knowledge sharing habits. By understanding the different knowledge creation and sharing practices along the core process, the author has been able to develop a picture of the preferred knowledge approaches, not just by business function but more importantly by the different collaborative working groups who interact along the core business process. The information gathered through the primary research allowed the organization to refocus on how to improve knowledge and information flows in order to improve process performance (McLaughlin et al., 2006). As there is little academic research on actual barriers to information and knowledge transfer along process pathways, the author, as a former employee, relied on pre-understanding (Gummesson, 1991) of the process and organization as a valid starting point for conducting this research.
Defining the Supply Chain Process

Organizations, in general, are now well aware of the components that make up their supply chain; indeed, these components are often well established and embedded. However, many still struggle with the problem of effective component alignment (Day, 1994; Teece, 1998). Functionally aligned organizations may understand and individually manage their supply chain components, but performance can only be maximised once they achieve the transformation to process alignment. Process aligned organizations focus on core process performance as opposed to functional business unit performance. This is a fundamental change for most organizations, and one that they must make in order to fully develop their supply chain capabilities (Van Weele, 2002). However, this shift in focus does not come easily to many organizations, as internal business unit boundaries can be difficult to remove (Argote & Ingram, 2000). The problem is exacerbated within complex organizations where capabilities
such as manufacturing, logistics, and procurement have been outsourced, as is the case with IBM's supply chain. IBM's supply chain organization (ISC) is functionally aligned as opposed to process aligned. Therefore, although functional organization charts provide a good initial indication of how the organization is structured, they give no real indication of how the different departments and workgroups actually interact with the OFP. The importance of viewing the supply chain from a process, as opposed to a functional, perspective is not new (Van Weele, 2002), so although the IBM ISC organization is functionally aligned, the author would need to define the organization from a process perspective. The OFP can be broken down into four basic components, as shown in Table 1. From the OFP components outlined in Table 1, certain departments and workgroups across the ISC can be quickly associated with the core process. However, other departments and workgroups will also have an impact, be it direct or indirect, on the performance of the OFP. Therefore, in order to develop an accurate understanding of information and knowledge habits, an extended view of how departments and workgroups interact with the OFP would need to be developed. In order to identify how different business groups across the organization interact along the OFP, the author used the IDEF methodology to map the OFP from end to end, and then social network analysis to link ISC departments to the core process (OFP).
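To make the mapping step concrete, the following minimal sketch (Python, with purely hypothetical department names and interactions, not IBM's actual data) shows one way departments can be grouped into candidate workgroups by the set of OFP components they touch, in the spirit of the social network analysis described above.

from collections import defaultdict

# Hypothetical sketch: which OFP components each department touches,
# directly or indirectly. Department names are illustrative only.
dept_interactions = {
    "Order Validation":  {"OR-OE"},
    "Customer Loading":  {"OR-OE"},
    "Demand Planning":   {"OE-OD"},
    "Supply Planning":   {"OE-OD"},
    "Build Scheduling":  {"OD-OS"},
    "Distribution Ops":  {"OS-ODel"},
    "Business Controls": {"OR-OE", "OE-OD", "OD-OS", "OS-ODel"},
}

# Departments sharing an identical interaction profile form one
# candidate workgroup, regardless of their functional home.
workgroups = defaultdict(list)
for dept, stages in dept_interactions.items():
    workgroups[frozenset(stages)].append(dept)

for profile, depts in workgroups.items():
    print(sorted(profile), "->", sorted(depts))

A full sociogram would also weight the ties between departments; the grouping by shared interaction profile is only the first cut.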
Mapping Employees to the Core Process

Once the OFP had been mapped, the author reviewed over 45 departmental operating manuals to see how and where each department interacted with the OFP. The departments were identified from a functional overview of the ISC's organization. Each department was assessed against the following criteria in order to ensure all those
Table 1. Order flow process key components

1. Order Receipt (OR) to Order Entry (OE): Process for getting an order from a customer and loading it into the IBM fulfilment system.
2. Order Entry (OE) to Order Drop (OD): Process for clearing the order through the fulfilment system to a point where the order is ready to "drop" into manufacturing for building.
3. Order Drop (OD) to Order Ship (OS): Process for getting an order through manufacturing to a point where it is ready to ship.
4. Order Ship (OS) to Order Delivery (ODel): Process for consolidating an order into a shipment and delivering the order to the customer.
impacting the OFP were flagged for inclusion in the research:

1. Select the department if its operational role clearly identifies it as having operational ownership of any part of the process that touches orders as they pass through the process.
2. Select the department if its operational role clearly identifies it as having operational ownership of any part of the process that can directly or indirectly impact orders in real time as they pass through the process.
3. Select the department even if its operational role does not identify it as having operational ownership of any part of the process, but where practical experience shows the department to be involved in a direct or indirect way which impacts order flow in real time.
Figure 1. Departments associated with OFP
Table 2. Workgroups linked to OFP

OR-OE: Primarily responsible for order receipt and loading activities, and for ensuring customer orders are valid prior to loading.
OE-OD: Primarily responsible for supply availability against order forecast/expectation, and demand planning.
OD-OS: Primarily responsible for order build scheduling, and for ensuring manufacturing is ready, from a material and resource perspective, to build customer orders.
OS-ODel: Primarily responsible for ensuring orders enter the distribution phase as soon as manufacturing is complete.
E-2-E Order Management: Made up of departments that have end-to-end customer responsibility for orders within the ISC organization, but do not directly manage orders through any stage of the process.
E-2-E Re-Engineering: Not responsible for actual orders in process, but responsible for system availability and compliance with process requirements.
E-2-E Administration: Support groups, such as business controls departments, that do not directly process orders but are responsible for business guidelines that can in turn impact the end-to-end process.
Senior Management: Responsible for operational decisions impacting order scheduling, resource allocation, and the prioritisation of organizational and process change.
By using the selection criteria outlined above, the number of relevant departments was reduced from 45 to 35. Through a further review of the departments' operating manuals, and using social network analysis, a sociogram was generated to show how the respective departments related to the core OFP components. From the social network analysis, eight distinct workgroups could be identified. These groups were made up of departments that were not necessarily from the same business functions, but did share common aspects of process interaction. The eight groups are identified in Table 2. Also important to note is the interdependent relationship the different workgroups had with each other; some departments had a direct interaction with the OFP, whereas other departments impacted the OFP through more indirect means. The interdependent relationship between the different workgroups is shown in Figure 2. By defining the end-to-end order flow process and then identifying and mapping the different workgroups to the OFP diagram, a new view of the
organization emerges, based not on hierarchical structure but on relative impact on the OFP and interdependent relationships.
Identifying Knowledge Usage across a Core Process

Knowledge, as generated and used within any organization, has two key components: explicit and tacit (Nonaka & Takeuchi, 2000; Polanyi, 1958; Smith, 2001). Tacit knowledge is very much dependent on the individual's experiences and perspectives. It is difficult to capture from a systems perspective, and most knowledge management (KM) systems rely on explicit knowledge capture as their main focus. Indeed, some researchers make the point that in order to improve KM efficiency an organization must focus on ICT and intelligent agents (Carneiro, 2001). According to Johannessen, Olaisen, and Olsen (2001), there is a real danger that, because ICT solutions focus mainly on explicit knowledge, tacit knowledge may be relegated to
Figure 2. Relationships between OFP workgroups (diagram: Senior Management (Group 8) provides business direction to E2E Order Mgmt (Group 5), E2E Re-Engineering (Group 6), and E2E Admin Support (Group 7), which in turn provide customer focus, technical/system support, and process/admin support to the order flow process groups OR-OE (Group 1), OE-OD (Group 2), OD-OS (Group 3), and OS-ODel (Group 4))
the background, and hence a knowledge mismatch. Therefore, in order for KM systems to maximise their potential, they need to address the question of how to capture and work with tacit, as well as explicit, knowledge, and not just through the use of ICT systems. From an organizational perspective this means understanding how knowledge becomes embedded in organizations, what form this knowledge takes, and how individuals react to and draw on it. In order to determine how information and knowledge are utilised around the OFP, Nonaka et al.'s (2000) breakdown of knowledge into tacit and explicit components needs to be considered, because Nonaka et al. (2000) identified four distinct knowledge transfer mechanisms that are inherent in any learning organization: tacit to tacit, tacit to explicit, explicit to explicit, and explicit to tacit. Before considering the different types of knowledge transfer mechanisms, consideration must also be given to the manner in which organizations can best manage the mechanisms. Hansen (1999) identified two distinct approaches: codified and personalised.

1. Codified Systems (Technology Driven): the use of technology to support and manage explicit knowledge.
2. Personalised Systems (Team Driven): the development of teams and the flow of tacit knowledge via the team dynamic.
Although Hansen (1999) suggests that an organization's approach will be either dominantly codified or dominantly personalised, a more balanced approach, in which neither is dominant, can be realised (Jennex & Olfman, 2006). It is this view that is fundamental to the research presented in this article. Indeed, both codified and personalised systems can be used to manage the different knowledge transfer mechanisms, as shown in Table 3.
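By way of illustration, Table 3's mechanism/approach pairing can be captured as a simple data structure so that each change request is tagged consistently. The sketch below (Python) uses hypothetical change requests; only the mechanism and approach vocabularies come from the article.

MECHANISMS = ("tacit-to-tacit", "tacit-to-explicit",
              "explicit-to-explicit", "explicit-to-tacit")
APPROACHES = ("codified", "personalised")

def tag_change(description, mechanism, approach):
    # Attach a (mechanism, approach) tag to a change request,
    # mirroring the pairings described in Table 3.
    assert mechanism in MECHANISMS and approach in APPROACHES
    return {"description": description,
            "mechanism": mechanism,
            "approach": approach}

changes = [
    tag_change("Improve system-to-system data transfer",
               "explicit-to-explicit", "codified"),
    tag_change("Set up an informal practitioner network",
               "tacit-to-tacit", "personalised"),
]
for c in changes:
    print(c)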
Knowledge Gap Analysis

The identified employees belonging to the eight OFP workgroups were then questioned using a mixture of online questionnaires and one-to-one interviews. The response rate for the eight groups can be seen in Table 4. The questionnaire focused on identifying how employees currently accessed information and
Table 3. A codified/personalised view of knowledge transfer

Tacit to Tacit (Codified): Change requests that enable better face-to-face interaction through the use of technology and available systems.
Tacit to Tacit (Personalised): Change requests that allow better face-to-face information sharing through formal/informal network development.
Tacit to Explicit (Codified): Change requests that improve the capture of information through improved systems interfaces.
Tacit to Explicit (Personalised): Change requests that improve an individual's ability to input valuable information into appropriate systems.
Explicit to Explicit (Codified): Change requests that improve system-to-system data transfer.
Explicit to Explicit (Personalised): Change requests that improve how information is manually pulled from systems, reformatted, and then re-entered into different systems.
Explicit to Tacit (Codified): Change requests that improve the way systems present information in a format acceptable to the user.
Explicit to Tacit (Personalised): Change requests that improve users' contextual understanding of the information on systems, and their ability to analyse that information.
knowledge in order to do their respective jobs. The employees were also asked to comment on how effective they believed the existing approach (codified or personalised) was in supporting them in doing their jobs. From the responses collated, the employees' view was that the dominant approach was a codified one, focused on integrated enterprise systems such as SAP, i2, and so forth. Although it was felt that these systems were important to the overall supply chain operation, the dominant focus on these systems meant that individual employees and groups did not have systems that supported more effective control and interaction within their work environment. This
Table 4. Level of response to OFP survey (n = sample size, N = population size)

OR-OE: 15.51% (n = 38, N = 245)
OE-OD: 25.00% (n = 8, N = 32)
OD-OS: 53.33% (n = 16, N = 30)
OS-ODel: 22.45% (n = 11, N = 49)
E2E Management: 20.59% (n = 21, N = 102)
E2E Re-Engineering: 26.00% (n = 13, N = 50)
Administration: 54.17% (n = 13, N = 24)
Snr Management: 87.50% (n = 7, N = 8)
Total: 23.52% (n = 127, N = 540)
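The response rates in Table 4 are simple sample-over-population percentages; as a quick arithmetic check, the short sketch below recomputes them from the table's n and N values (for example, 127/540 = 23.52% overall).

# Recomputing the Table 4 response rates (sample n over population N).
survey = {
    "OR-OE": (38, 245), "OE-OD": (8, 32), "OD-OS": (16, 30),
    "OS-ODel": (11, 49), "E2E Management": (21, 102),
    "E2E Re-Engineering": (13, 50), "Administration": (13, 24),
    "Snr Management": (7, 8),
}
for group, (n, N) in survey.items():
    print(f"{group}: {100 * n / N:.2f}% ({n}/{N})")
total_n = sum(n for n, _ in survey.values())  # 127
total_N = sum(N for _, N in survey.values())  # 540
print(f"Total: {100 * total_n / total_N:.2f}% ({total_n}/{total_N})")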
Table 5. Workgroups knowledge approach gap analysis

OR-OE: current approach Codified; desired approach Mixed (more focus on Personalised)
OE-OD: current approach Codified; desired approach Codified
OD-OS: current approach Codified; desired approach Mixed (more focus on Personalised)
OS-ODel: current approach Codified; desired approach Mixed (more focus on Personalised)
E-2-E Order Management: current approach Codified; desired approach Personalised
E-2-E Re-Engineering: current approach Codified; desired approach Mixed (more focus on Personalised)
E-2-E Administration: current approach Codified; desired approach Codified
Senior Management: current approach Codified; desired approach Personalised
view supported Marwick (2001) and Johannessen et al. (2001), who both identified that an over-dependency on technology results in a failure to fully address the knowledge needs of an organization. If we then consider Porter and Millar (1985), who identify knowledge as a key component of competitive advantage, a failure by any organization to fully address its knowledge needs will result in underperformance. From the analysis (Table 5), only two groups appeared to have the right knowledge system approach: OE-OD (codified systems), which mainly ensures supply is available to build before allowing an order to drop into manufacturing, and E-2-E Administration (codified systems), which ensures business control guidelines and reporting guidelines are followed. The responses obtained from the remaining groups indicated a belief that the existing dominant knowledge system approach did not support the knowledge and information sharing needs of the employees.
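The gap analysis itself reduces to comparing each workgroup's current dominant approach with its desired approach. The following minimal sketch (Python, transcribing Table 5) flags the mismatched groups.

# Table 5 transcribed: every group's current dominant approach was
# codified; the desired approach varies by group.
desired = {
    "OR-OE": "mixed (more personalised)",
    "OE-OD": "codified",
    "OD-OS": "mixed (more personalised)",
    "OS-ODel": "mixed (more personalised)",
    "E-2-E Order Management": "personalised",
    "E-2-E Re-Engineering": "mixed (more personalised)",
    "E-2-E Administration": "codified",
    "Senior Management": "personalised",
}
current = {group: "codified" for group in desired}
gaps = [g for g in desired if desired[g] != current[g]]
print(f"{len(gaps)} of {len(desired)} workgroups show a mismatch")
# -> 6 of 8 workgroups show a mismatch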
Changing the Process in Line with Knowledge Needs

How IBM, or any other complex organization, manages the re-alignment of supply chain relationships must surely impact both immediate
and future performance (Lee, Padmanabhan, & Whang, 1997; Troyer, 1995). Therefore, the senior management team implemented a change programme dependent on developing more effective cross-organizational working relationships in order to improve the end-to-end performance of the OFP. Performance is not simply down to the implementation of elaborate IT systems (Kotter, 1995), but requires the alignment of key personnel in an understanding of the knowledge management aspects relating to the end-to-end processes (Tsoukas, 1996; Wiig, 1997). This requires management to think about how the business operates from a process, as opposed to a functional, perspective (Van Weele, 2002). In order to see whether the changes driving performance improvement correlated with the desired knowledge approaches, a process optimisation team was set up, made up of key practitioners from all of the identified workgroups with the exception of senior management (McLaughlin et al., 2006). The reason for excluding senior management from this part of the change process was that the author wanted to develop a "bottom-up" solution for change. Senior management would then be re-engaged to review and prioritise the changes for improvement in line with the organization's strategic direction. In total, the optimisation team
Figure 3. Changes and knowledge transfer mechanisms (chart: number of changes implemented across the OFP, split into codified and personalised, for each transfer mechanism T->T, T->E, E->E, and E->T)
identified and implemented 90 changes across the OFP over a period of four months. Each change was assessed to determine the type of knowledge transfer mechanism it supported, the workgroups it impacted, and the type of knowledge systems approach used to implement the change. Figure 3 shows how the implemented changes to the OFP were seen to impact the different knowledge transfer mechanisms, and the knowledge approaches that the implemented changes would drive. As expected, changes relating to tacit to tacit transfer were implemented using personalised knowledge systems, and changes relating to explicit to explicit transfer were implemented using codified knowledge systems. What is interesting here are the change implementations relating to tacit to explicit and explicit to tacit transfer. In the case of tacit to explicit knowledge transfer, the majority of changes (75%) were implemented as codified-type changes, which tells us that the majority of changes focused on improving system and user interfaces. The personalised aspect relates to training and to personal knowledge sharing issues such as trust. With the explicit to tacit transfer mechanism, the changes were implemented mainly (66%) through personalised-type changes. Most of the focus was
on raising employees' ability to extract data from the systems, and then correctly interpret that data. However, to be of relevance to this case, this information needs to show how the changes impacted the different workgroups, and the type of knowledge systems then used to implement the changes. Figures 4 and 5 show this for codified and personalised systems respectively. Note that Senior Management is not included in these findings, as no changes were identified that directly impacted this group. The majority of codified system changes can be seen to impact OD-OS (manufacturing), OS-ODel (distribution), and E2E Re-Eng (BPR). Across these three groups, a significant amount of change related to explicit to explicit knowledge transfer. This in turn relates to how ICT systems transfer, manipulate, and store data between them. Within E2E Re-Eng, significant focus was also placed on explicit to tacit knowledge transfer. These changes focused on removing ambiguity and improving the way the E2E Re-Eng ICT systems present OFP data. Across all groups (with the exception of E2E Re-Eng), tacit to explicit related changes were present. These related to the way in which employees accessed systems, and to improving the flexibility they had in data entry. This was certainly important in the case of OR-OE (front end
order entry), where fulfilment specialists needed to add context to orders relating to additional customer information. In the case of OD-OS (manufacturing), the tacit to explicit changes related to improving the user interfaces of the manufacturing systems. Prior to the changes, the systems provided little flexibility to change order parameters once an order had been accepted into the manufacturing system.

Figure 4. Types of codified OFP changes (chart: percentage impact of codified change requests, by transfer mechanism E->T, E->E, T->E, and T->T, for each workgroup: OR-OE, E2E Order Mgmt, OE-OD, OD-OS, OS-ODel, E2E Re-Eng, and E2E Admin)
From a personalised system perspective, E2E Order Management and OE-OD (supply and demand planning) required no personalised system changes, because the main focus of their work relies heavily on codified systems (data manipulation, storage, transfer, and retrieval). What was interesting was the significant number of personalised system related changes impacting the OD-OS (manufacturing) group.
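The aggregation behind Figures 3 to 6 can be illustrated with a short, hypothetical sketch: each implemented change is tagged with the workgroup it impacted, the transfer mechanism it supported, and the approach used, and counts or percentage shares are then tallied. The change records below are invented examples, not the 90 actual changes.

from collections import Counter

# Hypothetical change records (T = tacit, E = explicit).
changes = [
    {"group": "OD-OS", "mechanism": "E->E", "approach": "codified"},
    {"group": "OD-OS", "mechanism": "E->T", "approach": "personalised"},
    {"group": "OR-OE", "mechanism": "T->E", "approach": "codified"},
    {"group": "OR-OE", "mechanism": "T->E", "approach": "personalised"},
]

# Tallies behind Figure 3 (mechanism x approach) and Figures 4-6
# (workgroup x approach).
by_mechanism = Counter((c["mechanism"], c["approach"]) for c in changes)
by_group = Counter((c["group"], c["approach"]) for c in changes)

t_to_e = sum(v for (m, _), v in by_mechanism.items() if m == "T->E")
codified_share = 100 * by_mechanism[("T->E", "codified")] / t_to_e
print(f"T->E changes implemented as codified: {codified_share:.0f}%")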
Figure 5. Types of personalised OFP changes (chart: percentage impact of personalised change requests, by transfer mechanism E->T, E->E, T->E, and T->T, for each workgroup: OR-OE, E2E Order Mgmt, OE-OD, OD-OS, OS-ODel, E2E Re-Eng, and E2E Admin)
Figure 6. Focus of knowledge transfer improvements across OFP workgroups (chart: percentage of codified versus personalised change focus for each OFP employee group: OR-OE, E2E Order Mgmt, OE-OD, OD-OS, OS-ODel, E2E Re-Eng, and E2E Admin)
The majority of these changes directly related to improving employees' ability to interpret system-generated data (explicit to tacit). Focus within this group was also given to improving manufacturing employees' ability to network socially with other employees along the OFP, and the supply chain in general (tacit to tacit). Figure 6 shows an overall view of how codified and personalised knowledge transfer changes impacted the different workgroups interacting along the OFP. The changes identified and implemented by the optimisation team were the only changes used to drive OFP end-to-end performance improvement. Over a period of four months, the changes drove overall end-to-end performance up by 23%. From a knowledge management perspective, the changes focused on certain knowledge transfer mechanisms, with the change implementation being driven from either a codified or a personalised systems approach. Table 6 compares the perceived knowledge approaches along the OFP prior to the process improvements being implemented, the desired knowledge approaches based on employee knowledge habits, and finally the actual knowledge approaches used to drive end-to-end performance.
Although the senior management group did not have any changes that directly impacted them, the formation of the optimisation team and the integration of the optimisation team's input into the existing management system allowed for better tacit to tacit and explicit to tacit knowledge transfer, which in turn was supported through a personalised system: the optimisation team and senior management review meeting (McLaughlin et al., 2006). Prior to the implementation of the OFP changes, only two workgroups believed they had the right knowledge approach to support their roles. However, on completion of the process improvements, six of the eight workgroups had knowledge approaches that more closely matched their desired knowledge approach.
Conclusion

Every supply chain will be different, and certainly the knowledge approaches identified across IBM's order flow process will differ from those experienced in other organizations. However, what the research identifies is the need to understand how knowledge and information are created, accessed, stored, interpreted, and shared along core business
Table 6. Workgroups knowledge approach gap analysis (current, desired, and actual knowledge approaches)

OR-OE: current Codified; desired Mixed (more focus on Personalised); actual Mixed (more focus on Personalised)
OE-OD: current Codified; desired Codified; actual Codified
OD-OS: current Codified; desired Mixed (more focus on Personalised); actual Codified
OS-ODel: current Codified; desired Mixed (more focus on Personalised); actual Mixed (more focus on Personalised)
E-2-E Order Management: current Codified; desired Personalised; actual Mixed (more focus on Personalised)
E-2-E Re-Engineering: current Codified; desired Mixed (more focus on Personalised); actual Mixed (more focus on Codified)
E-2-E Administration: current Codified; desired Codified; actual Mixed (more focus on Personalised)
Senior Management: current Codified; desired Personalised; actual Mixed (more focus on Personalised)
processes, and not just from an organization-wide, functional perspective. The research shows that where an organization looks at how knowledge and information are created and shared from a process perspective, as opposed to a functional perspective, real performance improvements can be realised. This finding supports the view of Smolnik, Kremer, and Kolbe (2005) that the real value of information and knowledge can best be understood when assessed in the context of the environment (and processes) in which it is being utilised. In the case of IBM's ISC, a high degree of focus had been placed on technology in order to drive supply chain performance. However, the findings from this case study support the belief of Marwick (2001) that technology alone cannot yet fully support all aspects of knowledge transfer, and of Johannessen et al. (2001) that an over-dependency on technology can result in the tacit aspect of knowledge creation being overlooked. The author also found that the knowledge and information needs of employees vary along the supply chain, and that in order to drive end-to-end performance, the
best results come when changes address these knowledge and information needs. In particular, process improvements need to focus not just on codified systems, but also on personalised systems. In effect, supply chain organizations also need to be clear about where along the process codified and personalised systems need to be implemented. It is the author's belief (supported by this research) that this cannot be done, especially in a complex organization, without directly researching the knowledge habits of the employees. The findings put forward in this article are limited by the fact that only one organization was analysed. However, the author feels that the findings identify a need to review complex supply chain processes from a knowledge perspective, with particular focus on the types of knowledge transfer mechanisms at work along the core processes. What this article sets out to do is identify a starting point from which organizations can understand how employees utilise knowledge and information at key points along the supply chain. By taking this perspective, organizations can better target knowledge and information
barriers along core processes with their process improvement initiatives and, by so doing, focus on changes that more effectively impact core process performance.
References

Argote, L., & Ingram, P. (2000). Knowledge transfer: A basis for competitive advantage in firms. Organizational Behavior and Human Decision Processes, 82(1), 150-169.
Barson, R., et al. (2000). Inter- and intra-organizational barriers to sharing knowledge in the extended supply chain. In E-2000 Conference Proceedings.
Carneiro, A. (2001). The role of intelligent resources in knowledge management. Journal of Knowledge Management, 5(4), 358-367.
Davenport, T., & Prusak, L. (1998). Working knowledge. Boston, MA: Harvard Business School Press.
Day, G.S. (1994). The capabilities of market-driven organizations. Journal of Marketing, 58(4), 37-52.
Fuller, S. (2001). Knowledge management foundations. Boston, MA: Butterworth-Heinemann.
Gummesson, E. (1991). Qualitative methods in management research. London: Sage Publications.
Hansen, M.T. (1999). The search-transfer problem: The role of weak ties in sharing knowledge across organization subunits. Administrative Science Quarterly, 44(1), 82-111.
Jennex, M.E., & Olfman, L. (2006). A model of knowledge management success. International Journal of Knowledge Management, 2(3), 51-68.
Johannessen, J., Olaisen, J., & Olsen, B. (2001). The mismatch of tacit knowledge: The importance of tacit knowledge, the dangers of information technology, and what to do about it. International Journal of Information Management, 21(1), 3-21.
Kotter, J.P. (1995, March-April). Leading change: Why transformation efforts fail. Harvard Business Review, 59-67.
Kulp, S.C., Ofek, E., & Whitaker, J. (2003). Supply chain coordination. In T. Harrison, H.L. Lee, & J.L. Neale (Eds.), The practice of supply chain management: Where theory and application converge. Boston, MA: Kluwer Academic Publishers.
Lee, H.L., Padmanabhan, V., & Whang, S. (1997). The bullwhip effect in supply chains. Sloan Management Review, 38(3), 93-102.
Marwick, A.D. (2001). Knowledge management technology. IBM Systems Journal, 40(4), 814-831.
McLaughlin, S., & Paton, R.A. (2008). Identifying barriers that impact knowledge creation and transfer within complex organisations. Journal of Knowledge Management, 12(4).
McLaughlin, S., Paton, R.A., & Macbeth, D. (2006). Managing change within IBM's complex supply chain. Management Decision, 44(8), 1002-1019.
Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. New York: Oxford University Press.
Polanyi, M. (1958). Personal knowledge: Towards a post-critical philosophy. Chicago, IL: University of Chicago Press.
Porter, M.E., & Millar, V.E. (1985, July-August). How information gives you competitive advantage. Harvard Business Review, 149-161.
Prusak, L. (2001). Where did knowledge management come from? IBM Systems Journal, 40(4), 1002-1007.
Simons, R. (2005). Levers of organization design. Boston, MA: Harvard Business School Press.
Skyrme, D.J., & Amidon, D.M. (1997). Creating the knowledge-based business. London: Business Intelligence.
Smolnik, S., Kremer, S., & Kolbe, L. (2005). Continuum of context explication: Knowledge discovery through process-oriented portals. International Journal of Knowledge Management, 1(1), 27-46.
Teece, D.J. (1998). Capturing value from knowledge assets: The new economy, markets for know-how, and intangible assets. California Management Review, 40(3), 55-78.
Tiwana, A. (2000). The knowledge management toolkit. Upper Saddle River, NJ: Prentice Hall PTR.
Troyer, C.R. (1995). Smart movers in supply chain coordination. Transportation & Distribution, 36(9), 55.
Tsoukas, H. (1996). The firm as a distributed knowledge system: A constructionist approach. Strategic Management Journal, 17, 11-25.
Van Weele, A.J. (2002). Purchasing and supply chain management (3rd ed.). London: Thomson Learning.
Von Hayek, F. (1952). The counter-revolution of science. Chicago: University of Chicago Press.
Wiig, K. (1997). Knowledge management: An introduction and perspective. Journal of Knowledge Management, 1(1), 6-14.
Yin, R.K. (2002). Case study research (3rd ed.). London: Sage Publications.
APPENDIX

Knowledge Transfer Questionnaire

Q1. Does a lack of any of the following existing resources directly impact your Team's effectiveness to communicate and use information? (Select as many answers as you feel relevant)
1. Lack of financial investment in the area.
2. Lack of time to complete tasks/access data.
3. Lack of technology to support job.
4. Lack of skills/training to support job.
5. Lack of personnel to support the job.
6. I'm not aware of any resource issues.
You may list any additional comments in the text box below...
Q2. How do you think the current reward system (PBC contribution) impacts overall organizational performance? (Please select only one answer)
1. Encourages people to strive for personal success in meeting their goals.
2. Encourages people to look for ways to cooperate and work with their peers.
3. Does not really impact the way people work, or interact with colleagues.
You may list any additional comments in the text box below...
Q3. When working within a matrix environment, what factors improve the way your team shares information and knowledge? (Select as many answers as you feel relevant)
1. We need effective e-mail communication.
2. We need a shared understanding of the job with colleagues.
3. We need to physically meet and know our colleagues.
4. We need regular face to face meetings.
5. We need shared and understood business goals.
6. None of the above.
You may list any additional comments in the text box below...
Q4. When looking for information to do their job, which statement below best applies? (Please select only one answer)
1. The Organization provides and identifies specific databases and data sources for employees to do their job.
2. The Organization provides but does not actively identify specific databases and data sources for employees to do their job.
3. Employees are encouraged to look wherever they want for the necessary info/knowledge to do their job.
You may list any additional comments in the text box below...
Q5. Does the deployed IT solution support the way your Team members access, create, and share knowledge throughout the organization? (Please select only one answer)
1. The IT systems support the way they need to access, create, and share information.
2. The IT systems support the way they need to access and create but NOT share information.
3. The IT systems support the way they need to access but NOT create and share information.
4. The IT systems do NOT support the way they need to access, create, and share information.
You may list any additional comments in the text box below...
Q6. Do you see IT legacy systems as having an impact on interorganizational transfer of information and knowledge (for example, the ability to pull information warehouse data, like info on DB2 tables, to get useful information)? (Please select only one answer)
1. Compatibility between legacy and current systems seriously impacts the way we transfer knowledge.
2. Compatibility between legacy and current systems impacts the way we transfer knowledge.
3. Compatibility between legacy and current systems does not impact the way we transfer knowledge.
You may list any additional comments in the text box below...
Q7. How would you determine the importance or value of information presented to your team members in the course of their day-to-day job? (Please select only one answer)
1. Mainly through Team rooms/reports/databases and structured data repositories (IT systems).
2. Mainly through face to face meetings, phone or e-mail conversations (Personal contact).
You may list any additional comments in the text box below...
Q8. How do you think your Team determines whether information/knowledge generated within your work group has implications for the wider organization? (Please select only one answer)
1. They rely on the IT systems to transfer the info/knowledge to different parts of the Organization. Other parts of the Organization can then decide on its value to them.
2. They look at the info/knowledge and discuss it with cross functional peers to see if additional benefit can be found.
You may list any additional comments in the text box below...
Q9. When trying to find the answer to a unique problem relating to their job, how do you think your employees generally proceed? (Please select only one answer)
1. There are easy to find data repositories on the system which can direct them to the right location for help.
2. The Organization has nominated Subject Matter Experts who can be easily contacted for help.
3. They mainly rely on an informal network of friends and colleagues to find the answer.
You may list any additional comments in the text box below...
Q10. The financial cost of setting up and running interorganizational collaboration (face to face meetings, Team rooms, intranet access, etc.) is directly impacting your ability to create and share information/knowledge through the organization. How do you feel about this statement? (Please select only one answer)
1. Strongly Agree
2. Agree
3. Makes no difference
4. Disagree
5. Strongly Disagree
If you strongly disagree please say why...
Q11. As part of a Function/Department, what is your approach to sharing information/knowledge (which has been developed within your area) with other Functions/Departments? (Please select only one answer)
1. We share information and knowledge freely and openly with all who want it.
2. We share information and knowledge freely and openly only with those in our own Organization.
3. We share information and knowledge freely and openly only with those related to our business function.
4. We share information and knowledge openly and freely only with those who work in our Department.
5. We share information and knowledge only on a need to know basis.
Please list any additional factors in the text box below...
Q12. How do physical distance, cultural differences, or language affect the way your Team members communicate and share knowledge with people throughout the organization? (Please select only one answer)
1. These factors prevent them from doing an effective job.
2. These factors are present but do not prevent them from doing an effective job.
3. I am aware of the factors but do not see them having any significance to their job.
4. I am not aware of these factors at all.
Please list any additional factors in the text box below...
Q13. When your Team receives information/knowledge from the different sources throughout the ISC organization, how do you think they gauge its usefulness? (Please select only one answer)
1. They only assume the information/knowledge to be useful based on prior experience of the source of the information/knowledge.
2. They assume all information/knowledge from Organizational sources to be accurate and useful.
3. They assume all information/knowledge from IT systems to be useful, but do not always accept it from colleagues unless they regard them as being reliable.
4. They assume all information/knowledge from colleagues to be useful, but do not always accept it from IT Systems unless they regard them as being reliable.
You may list any additional comments in the text box below...
Q14. Do you think the way the supply chain organization is structured supports the creation and sharing of information and knowledge across the organization? (Please select only one answer)
1. Strongly Agree
2. Agree
3. Makes no difference
4. Disagree
5. Strongly Disagree
If you strongly disagree please say why...
Q15. When your Team members receive information/knowledge, how do you think they determine the reliability of the source of the information/knowledge? (Please select only one answer)
1. We automatically view all IBM sources as reliable.
2. We automatically view all ISC sources as reliable.
3. We only automatically view sources within our Function/Dept as reliable.
4. We only view sources we know personally, or who have been vouched for by a reliable source, as being reliable.
You may list any additional comments in the text box below...
Q15a. What percentage of the information that your Team members receive on a day-to-day basis would you consider to come from a reliable source? (Please select only one answer)
1. All information/knowledge is reliable.
2. Between 0% and 20% is reliable.
3. Between 20% and 40% is reliable.
4. Between 40% and 60% is reliable.
5. Between 60% and 80% is reliable.
6. All information/knowledge is unreliable.
You may list any additional comments in the text box below...
Q16. How do your Team members view the personal knowledge they have about their job and the processes they work with? (Please select only one answer)
1. The more unique knowledge they have, the greater their worth to the organization.
2. The more they share their knowledge, the greater their worth to the organization.
You may list any additional comments in the text box below...
Q17. The desire to protect the interests of the Dept/Function/Organization affects your desire to share information/knowledge with others. How do you feel about this statement? (Please select only one answer from each section)

Interests of Dept: 1. Strongly Agree 2. Agree 3. Makes no difference 4. Disagree 5. Strongly Disagree
Interests of Function: 1. Strongly Agree 2. Agree 3. Makes no difference 4. Disagree 5. Strongly Disagree
Interests of Organization: 1. Strongly Agree 2. Agree 3. Makes no difference 4. Disagree 5. Strongly Disagree
If you strongly disagree please say why...
Q18. When sharing information/knowledge with suppliers and business partners, you censor the information/knowledge in case the supplier or business partner passes this on to your competitors. How do you feel about this statement? (Please select only one answer)
1. Strongly Agree
2. Agree
3. Makes no difference
4. Disagree
5. Strongly Disagree
If you strongly disagree please say why...
Q19. Trusting a recipient to use your information/knowledge correctly will be a key consideration when determining the quality and quantity of information/knowledge you pass on. How do you feel about this statement? (Please select only one answer)
1. Strongly Agree
2. Agree
3. Makes no difference
4. Disagree
5. Strongly Disagree
If you strongly disagree please say why...
Q20. In your part of the organization, are there any risks, such as fear of penalty payments for poor performance, losing profits, or customer dissatisfaction, associated with sharing information/knowledge? (Please select only one answer from each section)

Fear of Penalty Payments: 1. Yes 2. No 3. Don't know
Losing Profits: 1. Yes 2. No 3. Don't know
Customer Dissatisfaction: 1. Yes 2. No 3. Don't know
You may list any additional comments in the text box below...
Q21. For you/your Team to continue sharing information/knowledge, it is important that the recipient also shares information/knowledge with you. How do you feel about this statement? (Please select only one answer)
1. Strongly Agree
2. Agree
3. Makes no difference
4. Disagree
5. Strongly Disagree
You may list any additional comments in the text box below...
Q22. When sharing information/knowledge with other parts of the organization which are geographically, culturally, or linguistically separated from you, you experience resistance from them in considering and using your information and knowledge (Not Invented Here Syndrome). How do you feel about this statement? (Please select only one answer from each section)

Separated by geo/physical distance: 1. Strongly Agree 2. Agree 3. Makes no difference 4. Disagree 5. Strongly Disagree
Separated through cultural difference: 1. Strongly Agree 2. Agree 3. Makes no difference 4. Disagree 5. Strongly Disagree
Separated through language difference: 1. Strongly Agree 2. Agree 3. Makes no difference 4. Disagree 5. Strongly Disagree
If you strongly disagree please say why...
Q23. When working with suppliers or business partners, the level of collaboration between you will be directly related to their ability to perform on the same professional level as you. How do you feel about this statement? (Please select only one answer)
1. Strongly Agree
2. Agree
3. Makes no difference
4. Disagree
5. Strongly Disagree
If you strongly disagree please say why...
Q24. When you create new information/knowledge relating to improving the way you work (e.g., process improvements, lessons learned), the current IT Systems only provide you with the means of quickly storing or implementing your new knowledge if it is in the form and format of existing information/knowledge. How do you feel about this statement? (Please select only one answer)
1. Strongly Agree
2. Agree
3. Makes no difference
4. Disagree
5. Strongly Disagree
If you strongly disagree please say why...
Q25. When presented daily with voluminous amounts of information, how do you identify the information/knowledge that is of benefit to you? (Please select only one answer)
1. I only access sources of info/knowledge which have been identified as being relevant to my job.
2. I sometimes access info/knowledge sources outside the scope of my job for info/knowledge that might improve the way we work.
3. I often access info/knowledge sources outside the scope of my job for info/knowledge that might improve the way we work.
You may list any additional comments in the text box below...
Q26. How long have you worked in your current role within the ISC (if you have recently moved, since December, then the question refers to your previous role within the ISC), and overall within IBM? (Please select only one answer from each section)

Time in ISC role: 1. Less than 6 months 2. 6 months to 1 yr 3. 1 yr to 2 yrs 4. Greater than 2 yrs
Time with IBM: 1. Less than 1 yr 2. 1 yr to 2 yrs 3. 2 yrs to 4 yrs 4. 4 yrs to 6 yrs 5. 6 yrs to 8 yrs 6. 8 yrs to 10 yrs 7. Greater than 10 yrs
You may list any additional comments in the text box below...
This work was previously published in International Journal of Knowledge Management, Vol. 5, Issue 2, edited by M. E. Jennex, pp. 64-86, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 6.8
Meta-Heuristic Approach to Solve Mixed Vehicle Routing Problem with Backhauls in Enterprise Information System of Service Industry S.P. Anbuudayasankar Amrita School of Engineering, India K. Ganesh Global Business Services – Global Delivery, IBM India Private Limited, India K. Mohandas Amrita School of Engineering, India Tzong-Ru Lee National Chung Hsing University Taiwan, ROC
ABSTRACT

This chapter presents the development of a simulated annealing (SA) approach for a health care application modeled as a single-depot vehicle routing problem known as the Mixed Vehicle Routing Problem with Backhauls (MVRPB), an extension of the Vehicle Routing Problem with Backhauls (VRPB). This variant involves both delivery and pick-up customers, and the sequence in which the customers are visited is mixed. The entire pick-up load must be taken back to the depot.
DOI: 10.4018/978-1-61520-625-4.ch015
The recent rapid advancement of meta-heuristics has shown that they can be applied in practice if they are embodied in packaged information technology (IT) solutions; here, a Supply Chain Management (SCM) application integrated with an enterprise resource planning (ERP) system resulted in such a decision support tool. This chapter provides empirical evidence in support of the hypothesis that a population extension of SA with supportive transitions leads to a major increase in efficiency and solution quality for the MVRPB if and only if the globally optimal solution is located close to the center of all locally optimal solutions.
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Supply chain management (SCM) processes can be classified into two major categories: planning and execution. Supply chain planning covers the processes related to forecasting material requirements, planning for production and distribution, and so on, whereas supply chain execution focuses on the actual implementation of the supply chain plan, comprising processes such as production and stock control, warehouse management, transportation, and delivery (Ballou, 1978; Lambert et al., 1998). Both planning and execution are widely studied due to their critical impact on customer service, cost effectiveness, and, thus, competitiveness in increasingly demanding global markets. At the same time, equal importance should be given to service sectors such as food distribution, medicine distribution, and blood distribution. Modern information technology (IT) and developments in software applications facilitate solving problems related to supply chain processes at all three decisional levels of SCM, namely the strategic, tactical, and operational levels. In recent years, transportation planning at the operational level has received wide attention due to the complexity of the problem; it normally comprises linear programming based models, whose functionality is mainly found in most ERP systems, while operational level applications mostly utilise heuristic and meta-heuristic algorithms. Most logistics management problems have been largely addressed as the Vehicle Routing Problem (VRP), particularly in the context of manufacturing industries (Laporte and Osman, 1995). Modeling and analyzing the service sectors is equally important, both from an optimization point of view and for the benefit of serving human life. Blood bank management is a vital, life-saving activity with some appealing characteristics as far as logistics is concerned. In general, blood is procured in large quantities from blood donation camps. A Central
Blood Bank (CBB) governs the procurement, processing, storage, and distribution of blood. Regional Blood Banks (RBBs) facilitate the functions of the CBB. The logistics of a blood bank thus consist of blood procurement, processing, cross-matching, storage, distribution, recycling, pricing, quality control, and outdating (Pierskalla, 1974; Cohen et al., 1979; Pierskalla and Roach, 1972; Perry, 1996). This chapter considers the routing of CBBs, RBBs, and blood donation camps, where pickup occurs at blood donation camps (fresh blood) and also at RBBs (unused blood to be transferred to another RBB so as to meet demand). The sequence of visiting RBBs and blood donation camps is mixed. This problem fits the variant of the VRP called the Mixed Vehicle Routing Problem with Backhauls (MVRPB). The remainder of the chapter is organized as follows: the motivation for the problem is explained in Section 2; the structure of the blood bank supply chain is described in Section 3; the literature on blood bank logistics is surveyed in Section 4; detailed literature on the VRPB and MVRPB is provided in Section 5; the MVRPB is explained in Section 6; a lower bound for the MVRPB is proposed in Section 7; simulated annealing for the MVRPB is explained in Section 8; computational results are detailed in Section 9; and Section 10 concludes the chapter.
MOTIVATION OF THE PROBLEM

In today's highly competitive and demanding environment, the pressure on both public and private organizations is to find better ways to deliver value to end customers. There has been growing recognition that the twin goals of cost reduction and customer service are achieved through logistics and SCM (Houlihan, 1988; New and Payne, 1995; New, 1997; Hines et al., 1998). Organizations have different strategies for managing various functions, based on their respective organizational circumstances. Though there are
differences and changes in the strategies adopted by various organizations, the major SCM-related functions for any organization are procurement, demand forecasting, capacity planning, selecting vendors and customers, distribution, and inventory management. This chapter focuses on two of these issues: the procurement and distribution management aspects of SCM. To understand the relevance of SCM in government sectors, one must understand the difference between the objective of a government/public sector enterprise and that of a private sector enterprise. The objective of a government/public sector enterprise is not solely the maximization of profit, but also the economic development of the nation (as a long-term goal) and the welfare of society, whereas a private sector enterprise is oriented towards the sole objective of profit maximization (Gupta and Chandra, 2002). Although the objectives of the two categories of enterprise are entirely different, they share some features, such as satisfying their respective consumers by providing them with the right product, in the right condition and at the right time, at the least cost, by allocating limited resources (of the nation and/or enterprise) for this purpose. In the government sector (in India), the SCM paradigm can be used by public sector organizations involved in many areas, such as the fertilizer production industry, the steel industry, food grain procurement and distribution, petroleum products, postal clearance and delivery systems, import and export, banking and financial services, and public health services. Of these, public health services are critical. Hospitals, dispensaries, and blood banks form the backbone of the health services offered by the government of India, and the functioning of these organizations needs to be strengthened. Unavailability of essential drugs, blood, and other medical supplies leads to crisis. The application of SCM to the procurement and distribution of life-saving medical drugs, blood, and other medical items can overcome this crisis, provide services in time to meet social objectives, and promote the welfare of society at large.
BACKGROUND OF BLOOD BANK LOGISTICS

A typical structure of a blood bank supply chain is shown in Figures 1 and 2 for a system with p blood camps, c CBBs, and r RBBs. The objective is to find routes covering the CBBs, RBBs, and blood camps for blood distribution and collection (procurement) with a fleet of capacitated homogeneous vehicles.
Figure 1. Typical structure of a blood bank supply chain
Figure 2. Structure of the blood bank supply chain (continued)
Blood Collection and Distribution Problem

A typical blood bank has to ensure the availability of blood and must be able to deliver it as and when required, supplying not only the quantity but also the required blood group to the RBBs, whose demand is highly uncertain. Though this resembles the classical VRP, the practical state of affairs includes the complexity of mixed delivery and pickup along the vehicles' routes. Figure 3 represents this situation.
Figure 3. Blood collection and distribution without integration among the CBB, RBBs, and blood camps
Blood camps (PP) deliver blood to the Central Blood Bank (C). The Regional Blood Banks can be categorized in two ways: Regional Blood Banks with only delivery (RD), and RBBs with only pick-up (RP). From an RD node, unused blood is also collected and delivered back to C, so the vehicle flow serves both purposes. Figure 3 depicts this situation, showing the lack of integration among the CBB, the RBBs, and the blood camps.
The task is to find a route pattern from the CBB to all RBBs for the distribution and collection of blood, and to all blood camps for the collection of fresh blood. Typically, a vehicle starts from the CBB, distributes blood to the RBBs, and also covers certain RBBs and blood camps in the middle of the distribution in order to collect blood. Figure 4 represents a route that covers both pickup and delivery nodes, where the CBB, RBBs, and blood donation camps are integrated. This problem can be represented as the Mixed Vehicle Routing Problem with Backhauls (MVRPB). The assumption made here is that blood cannot be transferred directly from one RBB to another; all blood must either originate from or end up at the CBB.
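As a minimal illustration of the MVRPB structure just described, the sketch below (Python, with hypothetical coordinates, quantities, and vehicle capacity, and Euclidean distances) checks the feasibility and cost of a single mixed route: the vehicle leaves the CBB carrying all deliveries, the load falls at delivery RBBs and rises at pickup points, and the capacity must never be exceeded at any point along the route.

import math

CAPACITY = 100          # hypothetical vehicle capacity (units of blood)
DEPOT = (0.0, 0.0)      # the CBB

# Each stop: (x, y, delivery_qty, pickup_qty). Delivery stops are RBBs;
# pickup stops are blood camps (fresh blood) or RBBs returning unused
# blood. All figures are invented for illustration.
route = [
    (10.0, 5.0, 30, 0),   # RBB, delivery
    (12.0, 9.0, 0, 25),   # blood camp, pickup
    (4.0, 14.0, 40, 0),   # RBB, delivery
    (-6.0, 8.0, 0, 15),   # RBB returning unused blood, pickup
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost_if_feasible(route):
    # The vehicle leaves the CBB carrying every delivery quantity.
    load = sum(stop[2] for stop in route)
    if load > CAPACITY:
        return None
    cost, pos = 0.0, DEPOT
    for x, y, deliver, pick in route:
        cost += dist(pos, (x, y))
        pos = (x, y)
        load = load - deliver + pick   # mixed sequence: load can rise again
        if load > CAPACITY:
            return None                # capacity violated mid-route
    return cost + dist(pos, DEPOT)     # pickups are carried back to the CBB

print(cost_if_feasible(route))

A meta-heuristic such as the SA approach developed later in the chapter would repeatedly perturb the visiting sequence and keep only feasible routes of lower cost; this sketch shows only the feasibility and cost evaluation, not the search itself.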
LITERATURE BACKGROUND FOR BLOOD BANK LOGISTICS

During the last 30 years, significant contributions have been made in addressing many blood management problems, especially of the tactical and operational type at the hospital and regional level (Prastacos, 1984). These contributions include both interesting theoretical results and the successful implementation of systems designed on the basis of these results.
A number of papers are available on the statistical analysis of demand and usage of blood at the hospital and regional level, which is a vital input for blood distribution management (Elston and Pickrel, 1963; Rabinowitz and Valinsky, 1970; Brodheim and Prastacos, 1980). Ordering is one of a blood bank's most important policies affecting distribution; the policy determines the frequency and the size of the orders placed by the inventory location (hospital) with the regional centers. Both analytical (Nahmias, 1977; Cohen, 1976; Cosmetatos and Prastacos, 1981) and empirical research (Jennings, 1973; Brodheim et al., 1976; Cohen and Pierskalla, 1979) has been undertaken by several researchers. A collection and distribution planning system involves targets for average daily collection (Prastacos and Brodheim, 1980), a forecasting system to predict the quantity of supply from different sources (Frankfurter et al., 1974), an information system (Prastacos and Brodheim, 1979; Kendall, 1980) with an extensive database and effective database management, and a routing and scheduling algorithm to schedule visits so as to achieve the desired targets. However, only a few papers have addressed the issues of routing and scheduling algorithms. Pegels et al. (1975) developed an interactive planning and scheduling system at the hospital level. Brodheim
Figure 4. An integrated route covering both pickup and delivery nodes (CBB, RBBs, and blood donation camps)
and Prastacos (1979) designed a decision support system, the Programmed Blood Distribution System (PBDS), focused mainly on policy and inventory management issues. Prastacos (1981) devised an optimal policy considering outdate and shortage costs for routing and scheduling systems, and Federgruen et al. (1982) extended this work by considering location-dependent costs. Some of the more recent literature on routing variants for blood bank logistics includes Ganesh and Narendran (2005) and Ganesh and Narendran (2007a, 2007b, 2007c). From this, it is evident that there is considerable scope for research on routing and scheduling for blood bank distribution and collection systems.
LITERATURE BACKGROUND FOR VRPB AND MVRPB

Vehicle Routing Problem with Backhauls (VRPB)

For the classical VRPB, a number of exact and heuristic algorithms are available, and its extensions have also been studied considerably. Some recent research on the VRPB is presented in this review. The first constructive method for the classical VRPB was proposed by Deif and Bodin (1984) as an extension of Clarke and Wright's (1964) savings algorithm. Goetschalckx and Jacobs-Blecha (1989) formulated the Vehicle Routing Problem with Clustered Backhauls (VRPCB); they developed a heuristic method for the multi-vehicle case in which both the clustering and the routing parts are solved by means of a space-filling curve heuristic. Anily (1996) developed a lower bound on the optimal total cost and a heuristic solution for the VRPB. In their second paper, Goetschalckx and Jacobs-Blecha (1993) used a clustering method based on the generalized assignment approach
proposed by Fisher and Jaikumar (1981), where the number of routes K to be constructed by the heuristic is specified in advance. The line-haul customers and the backhaul customers are sorted according to increasing and decreasing distance from the depot, respectively. By solving the generalized assignment heuristics, both customer sequences are divided into K clusters. After clustering the two customer types separately, line-haul and backhaul routes are merged according to the combination of connections with the smallest distance, while not allowing any backhaul customer to be served before a line-haul customer. Toth and Vigo (1999) proposed another two-phase method for the classical VRPB, solving both the symmetric and the asymmetric VRPB with a cluster-first route-second heuristic. Mingozzi et al. (1999) and Toth and Vigo (1997) approached the VRPB with exact methods. Mingozzi et al. (1999) formulated the VRPB as an integer programming problem and described a procedure that computes a valid lower bound on the optimal solution cost by combining different heuristic methods for solving the dual of the LP-relaxation of the exact formulation. Toth and Vigo (1997) described a new 0-1 integer programming formulation of the VRPB based on a set-partitioning approach; they used a heuristic procedure to solve the dual of the LP-relaxation of the integer formulation to obtain a valid lower bound for the VRPB. Wade and Salhi (2001) proposed an ant system algorithm for the VRPB. Ropke (2005) addressed the VRP with pickup and delivery, solving it with an adaptive large neighborhood search heuristic, and developed branch-and-cut and branch-and-price algorithms for VRPB problems with time windows. Ropke and Pisinger (2006) improved their own version of the large neighborhood search heuristic (Ropke and Pisinger, 2004) to solve the VRPB. Brandão (2006) presented a new tabu search algorithm that was able to match almost
all the best published solutions and also found many new best solutions, particularly for a large set of benchmark problems. In a nutshell, Ropke and Pisinger (2006) and Brandão (2006) have reported competitive results for the benchmark data sets addressed so far. An extensive survey of the VRPB and its subclasses is available in Ropke (2005) and Parragh et al. (2007). Crispim and Brandão (2001) applied reactive tabu search, and Wassan (2007) proposed a hybrid of reactive tabu search and adaptive memory programming to solve the VRPB. For more heuristics and detailed explanations of the VRPB, refer to the book by Toth and Vigo (2002a).
Mixed Vehicle Routing Problem with Backhauls (MVRPB)

Some of the recent works on the MVRPB are presented in this review. The vehicle routing problem with mixed deliveries and pickups is a challenging extension of the vehicle routing problem that has lately attracted growing attention in the literature (Wassan et al., 2008). The first study of the MVRPB was presented by Deif and Bodin (1984). Casco et al. (1988) presented a load-based backhaul insertion algorithm for the mixed VRPB. A cluster-first route-second heuristic was adopted by Halse (1992), using a relaxed assignment problem and modified route improvement procedures. Mosheiov (1995) discussed an extension of the MVRPB, the pickup and delivery location problem, in which the objective is to determine the best location for the central depot. Mosheiov (1998) considered the multi-vehicle case of the MVRPB and proposed tour-partitioning heuristics for solving it. Salhi and Nagy (1999) investigated an extension of insertion-based heuristics built on the idea of inserting more than one backhaul at a time in the MVRPB. Duhamel et al. (1997), Nanry and Barnes (2000) and Crispim and
Brandão (2005) proposed tabu search heuristics to solve the MVRPB. Wade and Salhi (2002) proposed a new version of the MVRPB, named the restricted VRPB (RVRPB), which aims at a practical compromise between the classical VRPB and the MVRPB. In this problem, mixed line-haul and backhaul customers are permitted, but the position along the route at which the first backhaul may be served is restricted: the typical constraint prevents the inclusion of backhaul customers until a given percentage of the total line-haul load has been delivered. To solve this new variant they presented a heuristic in which the user sets a restriction percentage on the insertion of backhaul customers. Wade and Salhi (2003) later proposed an ant system algorithm to find the least-cost set of routes for the MVRPB. Aringhieri et al. (2004) provided a graph model for the MVRPB based on an asymmetric vehicle routing formulation. Ropke and Pisinger (2004) presented an improved version of the large neighborhood search heuristic proposed by Ropke (2003) and proposed two data sets for the MVRPB. The first set is based on a relaxed version of the Goetschalckx and Jacobs-Blecha (1989) problems, already studied by Halse (1992) and Wade and Salhi (2003). The other data set, proposed by Nagy and Salhi (2003), was constructed by transforming 14 well-known Capacitated VRP (CVRP) instances into MVRPB instances; three MVRPB instances were constructed from each CVRP instance, with 10%, 25% and 50% of the customers transformed into backhaul customers. They applied the heuristics to the data sets proposed by Dethloff (2002), Salhi and Nagy (1999) and Nagy and Salhi (2003). Tütüncü et al. (2009) presented a new visual interactive approach for the classical VRP with backhauls and its extensions, based on Greedy Randomised Adaptive Memory Programming Search (GRAMPS).
Ganesh and Narendran (2007a) proposed a multi-constructive heuristic to solve a new variant of the VRP with sequential delivery and pick-up; the proposed methodology can also be adapted to solve the MVRPB. In a subsequent paper, Ganesh and Narendran (2007b) proposed a multi-constructive heuristic for a new variant of the VRP with sequential delivery, pick-up and time windows, which can likewise be leveraged to solve the MVRPB. Gribkovskaia et al. (2008) developed classical construction and improvement heuristics, as well as a tabu search heuristic, and tested them on a number of instances derived from VRPLIB. Wassan et al. (2008) investigated two versions of the classical VRPB with a meta-heuristic approach based on reactive tabu search; they designed the meta-heuristic initially for the simultaneous case, later solved the mixed case, and reported that the approach yields good results. Anbuudayasankar et al. (2008) proposed a two-phase solution methodology combining a clustering approach and the Or-opt heuristic to solve the MVRPB in a case study of a third-party logistics service provider. Anbuudayasankar et al. (2009) proposed a composite genetic algorithm for the MVRPB. From this study, it is clear that there is wide scope for the development of meta-heuristic methodologies to solve the MVRPB.
MIXED VEHICLE ROUTING PROBLEM WITH BACKHAULS

The classical VRP is concerned with a set of customers to be served by a set of vehicles housed at a depot or distribution centre located in the same geographical region. The objective is to devise a set of vehicle routes of minimum distance such that all delivery points are served and the demands of the points assigned to each route do not exceed the capacity of the vehicle that serves that route. The objective is
to minimise the total distance travelled by all the vehicles and, in turn, the overall distribution cost. While the VRP has received much attention from researchers over the last four decades (Toth and Vigo, 2002), variants of this problem with more and more constraints have attracted the attention of the scientific community of late. One of the important extensions of the classical VRP is the VRPB, which addresses two types of customers: line-haul (delivery) and backhaul (pick-up). The critical assumption is that, on each route, all deliveries have to be made before any goods can be picked up, so as to avoid rearranging the loads on the vehicle. The quantities to be delivered and picked up are fixed and known in advance, and the vehicle fleet is assumed to be homogeneous (every vehicle has the same capacity). An example of this type of problem is the distribution of mineral water from a producer to retailers (line-haul) coupled with the collection of empty bottles from the retailers back to the producer (backhaul). A feasible solution consists of a set of routes in which all deliveries on each route are completed before any pickups are made and the vehicle capacity is violated neither by the line-haul nor by the backhaul customers assigned to the route. The main objective is to find such a set of routes that minimises the total distance travelled. The variant highlighted here, the MVRPB, is an extension of the classical VRPB in which deliveries after pickups are allowed, so that line-haul and backhaul customers are mixed along the routes. The objective and constraints of the MVRPB are the same as in the classical VRPB, apart from the fact that backhauls can be served before all line-haul customers have been served. Additional complications arise because of the fluctuating load. In the classical VRPB it is sufficient to check that the total delivery load and the total pickup load of each route do not separately exceed the maximum vehicle capacity. In the MVRPB this alone is not sufficient, as the load of the vehicle either decreases or
increases at each customer, depending on whether the customer has a line-haul or a backhaul load, respectively. It is therefore essential to check that the vehicle capacity is not exceeded on any arc along the route. Once a backhaul customer is placed on a route in the classical VRPB, the route direction becomes fixed; this is not the case in the MVRPB, where the direction is not necessarily fixed by the inclusion of a backhaul customer. Hence, feasibility needs to be checked in both directions. The MVRPB can be stated as follows: a set of N nodes with deterministic demands for delivery and pick-up services has to be visited by a fleet of homogeneous vehicles in a pure mixed sequence, all of which originate and terminate at the same node (the depot). The objective of the MVRPB is to find optimal routes for the entire fleet of vehicles. The following assumptions are made:

• all routes start and end at the node of origin, also known as the depot
• each node in N is visited and served exactly once
• the demand at any node shall never exceed the vehicle capacity Q
• all vehicles have the same capacity and are stationed at the node of origin
• split delivery is not permitted
• each vehicle makes exactly one trip
• all delivery quantities are loaded at the depot and all quantities picked up must be unloaded at the depot
• every route should start with a pure delivery node, then cover a mix of delivery and pick-up nodes so as to satisfy load feasibility
• no route shall comprise pick-up nodes exclusively
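Because the load on a mixed route rises and falls at every stop, the capacity check described above must be applied arc by arc and, since the route direction is not fixed, in both orientations. The following C++ sketch illustrates such a check; the Node structure, function names and depot-loading rule are illustrative assumptions, not taken from the chapter.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical customer record: delivery and pick-up quantities.
struct Node { double delivery; double pickup; };

// Checks that the load never exceeds capacity Q along one orientation of
// the route: the vehicle leaves the depot carrying all deliveries, then
// at each customer the load falls by the delivery and rises by the pickup.
bool feasibleOneWay(const std::vector<Node>& route, double Q) {
    double load = 0.0;
    for (const Node& n : route) load += n.delivery;  // load at depot
    if (load > Q) return false;
    for (const Node& n : route) {
        load += n.pickup - n.delivery;               // load after serving n
        if (load > Q) return false;
    }
    return true;
}

// In the MVRPB the direction is not fixed by a backhaul customer, so
// feasibility must be checked in both directions.
bool feasibleMixedRoute(std::vector<Node> route, double Q) {
    if (feasibleOneWay(route, Q)) return true;
    std::reverse(route.begin(), route.end());
    return feasibleOneWay(route, Q);
}

int main() {
    std::vector<Node> route = {{4, 0}, {3, 5}, {2, 4}};  // {delivery, pickup}
    std::printf("feasible: %s\n", feasibleMixedRoute(route, 10.0) ? "yes" : "no");
    return 0;
}
```

For instance, this route leaves the depot with a load of 9 (the sum of deliveries) and the load peaks at 9 again after the last stop, so it is feasible in the forward direction for a vehicle of capacity 10.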
With the VRP itself being NP-hard (Aarts and Lenstra, 1997) and the mixed sequence of delivery and pick-up imposing harder constraints, the MVRPB is clearly NP-hard.
LOWER BOUND FOR THE PROBLEM

Consider an optimal solution, which consists of partitions (N1, N2, …, NK) of the set of N nodes assigned to different vehicles (Mosheiov, 1994; Haimovich and Rinnooy Kan, 1985).

Notation:
L — length of the optimal tour through the nodes in subset Nj
Ri — Euclidean distance between the location of node i and the CBB o
R — average Euclidean distance over all locations
Z* — cost corresponding to the optimal solution
ZH — cost of the tour obtained by any given heuristic H

Proposition 1: A lower bound on the cost is Z* ≥ 2RN / 3Q.
A proof can be seen in Figure 5. The flow chart of the research methodology is explained in Figure 6.
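For intuition, Proposition 1 can be restated compactly. The sketch below is an informal reading of the bound, assuming unit demands; it is not a substitute for the proof referenced in Figure 5.

```latex
\[
Z^{*} \;\ge\; \frac{2\,R\,N}{3\,Q},
\qquad R = \frac{1}{N}\sum_{i=1}^{N} R_{i}.
\]
% Every demand must travel between its node and the depot, and a vehicle
% of capacity Q can serve only a bounded number of demands per trip, so
% on the order of N/Q round trips of average length 2R are unavoidable;
% the constant 1/3 absorbs the interleaving of delivery and pick-up loads.
```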
SIMULATED ANNEALING FOR MVRPB

The development of SA was initiated by the seminal work of Kirkpatrick et al. (1983). The step-wise procedure of the proposed SA for the MVRPB is explained below.

Step 0. Set the maximum number of generations (G).
Step 1. Initialize the population size (PS).
Figure 5.
Figure 6.
Step 2. Randomly generate the initial population (IP): IP = (P1, P2, …, PPS) ∈ N, where N is the solution space.
Step 3. Initialize the neighbourhood relation.
Step 4. Initialize the number of changeovers: C = m × PS, where m ∈ (0, 1) is a random number.
Step 5. Initialize the maximum temperature (Tmax) and the minimum temperature (Tmin).
Step 6. Initialize the cooling rate A, where A ∈ (0, 1).
Step 7. Randomly select a cooperator Po.
Step 8. Calculate the objective function for every solution in PS and for Po.
Step 9. Choose the best solution from PS, termed PSb, and compare it with Po. If Po has the minimum value compared to PSb, place Po as the initial solution for the changeover in the next generation; otherwise, check the following condition.
Step 10. If Random(0,1) < exp(−ΔPo / Tmax), set PSb as the initial solution for the changeover in the next generation; otherwise, retain Po as the solution for the changeover in the next generation.
Step 11. Lower the maximum temperature after each generation as Tmax = A × Tmax, and increment G by 1.
Step 12. Stop if the maximum number of generations G is reached; otherwise, go to Step 1.
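A minimal single-solution rendering of this scheme in C++ (the language the chapter reports using) is sketched below. The population, changeover count C and cooperator Po of the full procedure are omitted for brevity; routeCost() is a placeholder for the MVRPB objective, and all identifiers are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Placeholder objective: a real implementation would decode the permutation
// into capacity-feasible mixed routes and return their total distance.
double routeCost(const std::vector<int>& perm) {
    double c = 0.0;
    for (std::size_t i = 1; i < perm.size(); ++i)
        c += std::abs(perm[i] - perm[i - 1]);
    return c;
}

std::vector<int> simulatedAnnealing(int nNodes, int maxGen,
                                    double tMax, double tMin, double alpha) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::uniform_int_distribution<int> pos(0, nNodes - 1);

    // Step 2: one randomly generated initial solution.
    std::vector<int> current(nNodes);
    std::iota(current.begin(), current.end(), 0);
    std::shuffle(current.begin(), current.end(), rng);
    std::vector<int> best = current;

    double t = tMax;  // Step 5: temperature initialised to Tmax
    for (int g = 0; g < maxGen && t > tMin; ++g) {
        // Step 3: neighbourhood move (swap two random positions).
        std::vector<int> cand = current;
        std::swap(cand[pos(rng)], cand[pos(rng)]);

        // Steps 9-10: accept on improvement, or with Metropolis probability.
        double delta = routeCost(cand) - routeCost(current);
        if (delta < 0.0 || u01(rng) < std::exp(-delta / t))
            current = cand;
        if (routeCost(current) < routeCost(best)) best = current;

        t *= alpha;  // Step 11: geometric cooling, Tmax = A * Tmax
    }
    return best;
}
```

The essential mechanics are visible: a random neighbour is generated, accepted either on improvement or with the Metropolis probability exp(−Δ/T) as in Step 10, and the temperature is cooled geometrically by A as in Step 11.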
COMPUTATIONAL RESULTS

Computational results for the MVRPB are detailed here. The proposed SA was programmed in C++ and executed on a 1.7 GHz Pentium IV, and was tested on 15 benchmark data sets:

• 15 benchmark data-sets from Anbuudayasankar et al. (2009)

The number of nodes in the 15 data-sets ranges from 10 to 50 customers, and the backhaul percentages for the data-sets are 50%, 66% and 80%, respectively. In order to evaluate the SA, we compute the Relative percentage Deviation (RD) for each solution. The RD of SA is defined as RD = ((OH − O*) / OH) × 100%, where OH is the objective value obtained by SA and O* is the lower bound or best known solution for the variant. The average of the RDs over the best solutions is presented in the last row of each table. Table 1 exhibits the outcome of SA on the 15 benchmark data-sets from Anbuudayasankar et al. (2009).
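As a worked example of the RD definition, consider instance Data 4.2 from Table 1, where the SA solution is 199382 against a best known solution of 198411; the deviation is (199382 − 198411) / 199382 × 100 ≈ 0.49%. A minimal C++ check, using values taken from Table 1:

```cpp
#include <cstdio>

// RD = ((OH - O*) / OH) * 100, with OH the SA objective value and
// O* the best known solution (instance Data 4.2 of Table 1).
int main() {
    double oh = 199382.0;      // SA best solution
    double oStar = 198411.0;   // best known solution
    double rd = (oh - oStar) / oh * 100.0;
    std::printf("RD = %.2f%%\n", rd);  // prints: RD = 0.49%
    return 0;
}
```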
CONCLUSION

This chapter has developed a heuristic based on SA to solve the MVRPB. Standard benchmark data-sets were chosen from the literature for comparison. The results show that SA is competitive in solving the MVRPB: it matches the best known solution on most instances and deviates only slightly on a few. The advantages of the proposed heuristic include the short time needed to find good solutions for vehicle routing and scheduling, and the improved management of distribution resources to enhance ERP. Combining this technique with new IT technologies performs better than traditional management methods and leads to better utilization of distribution resources, more effective processes, customer service level
Table 1. Comparison of SA for MVRPB benchmark data-sets of Anbuudayasankar et al. (2009)

Name       No. of customers   No. of backhauls   No. of vehicles   Best known optimal solution (BK)   SA best solution   RD from BK (%)
Data 1.1          10                 25                 2                    83343                        83343              0.00
Data 1.2          15                 16                 3                    88865                        88865              0.00
Data 1.3          25                 10                 4                   109026                       109026              0.00
Data 2.1          10                 38                 2                   112303                       112303              0.00
Data 2.2          15                 25                 3                   118526                       118526              0.00
Data 2.3          25                 15                 4                   132647                       132647              0.00
Data 3.1          10                 25                 2                   119713                       119713              0.00
Data 3.2          15                 16                 3                   136620                       136620              0.00
Data 3.3          25                 10                 4                   165785                       165785              0.00
Data 4.1          35                 38                 5                   162412                       162412              0.00
Data 4.2          50                 25                 6                   198411                       199382              0.49
Data 5.1          35                 15                 5                   222417                       222417              0.00
Data 5.2          50                 50                 6                   202710                       203458              0.37
Data 6.1          35                 33                 5                   220141                       220141              0.00
Data 6.2          50                 20                 6                   286183                       289623              1.20
Average Relative Percentage Deviation (%)                                                                                   0.14

Note: A negative deviation would indicate that SA outperformed the best known solution.
improvements, and support for supply chain operational decisions, serving as a strategic and tactical decision support tool as well. The limitations of this research are the problem size and the number of data sets. This study may nevertheless provide a solid groundwork for further work on the relevance of meta-heuristics for solving the MVRPB. The SA can be enhanced using cooperative transitions; time windows, heterogeneous capacitated vehicles and multiple objectives are additional constraints for future work.
REFERENCES

Anbuudayasankar, S. P., Ganesh, K., Lee, T.-R., & Mohandas, K. (2009). COG: Composite Genetic Algorithm with Local Search Methods to Solve Mixed Vehicle Routing Problem with Backhauls – Application for Public Health Care System. International Journal of Services and Operations Management, 5(5), 617–636. doi:10.1504/IJSOM.2009.025117
Anbuudayasankar, S. P., Ganesh, K., & Mohandas, K. (2008). CORE: a heuristic to solve vehicle routing problem with mixed delivery and pickup. ICFAI Journal of Supply Chain Management, 5(3), 7–18.

Anily, S. (1996). The vehicle-routing problem with delivery and back-haul options. Naval Research Logistics, 43, 415–434. doi:10.1002/(SICI)1520-6750(199604)43:33.0.CO;2-C

Aringhieri, R., Bruglieri, M., Malucelli, F., & Nonato, M. (2004). An asymmetric vehicle routing problem arising in the collection and disposal of special waste. Electronic Notes in Discrete Mathematics, 17(20), 41–47. doi:10.1016/j.endm.2004.03.011

Ballou, R. H. (1978). Basic Logistics Management. Englewood Cliffs, NJ: Prentice-Hall.

Brodheim, E., Hirsch, R., & Prastacos, G. (1976). Setting Inventory Levels for Hospital Blood Banks. Transfusion, 16(1), 63–70.
Brodheim, E., & Prastacos, G. P. (1980). Demand, usage and issuing of blood at hospital blood banks. Technical report. Operations Research Laboratory, The New York Blood Center.

Cohen, M. A. (1976). Analysis of single critical number ordering policies for perishable inventories. Operations Research, 24, 726–741. doi:10.1287/opre.24.4.726

Cohen, M. A., & Pierskalla, W. P. (1979). Target inventory levels for a hospital blood bank or a decentralized regional blood banking system. Transfusion, 19(4), 444–454. doi:10.1046/j.1537-2995.1979.19479250182.x

Cohen, M. A., Pierskalla, W. P., Sassetti, R. J., & Consolo, J. (1979, September-October). An overview of a hierarchy of planning models for regional blood bank management, Administrative Report. Transfusion, 19(5), 526–534. doi:10.1046/j.1537-2995.1979.19580059802.x

Cosmetatos, G., & Prastacos, G. P. (1981). A perishable inventory system with fixed size periodic replenishments. Working Paper 80-10-01, Department of Decision Sciences, the Wharton School, University of Pennsylvania.
Dethloff, J. (2002). Relation between vehicle routing problems: an insertion heuristic for the vehicle routing problem with simultaneous delivery and pick-up applied to the vehicle routing problem with backhauls. The Journal of the Operational Research Society, 53(1), 115–118. doi:10.1057/palgrave.jors.2601263

Duhamel, C., Potvin, J. Y., & Rousseau, J. M. (1997). A tabu search heuristic for the vehicle routing problem with backhauls and time windows. Transportation Science, 31, 49–59. doi:10.1287/trsc.31.1.49

Elston, R., & Pickrel, J. C. (1963). A statistical approach to ordering and usage policies for a hospital blood transfusion. Transfusion, 3, 41–47. doi:10.1111/j.1537-2995.1963.tb04602.x

Federgruen, A., Prastacos, G. P., & Zipkin, P. (1982). An allocation and distribution model for perishable products. Research Working Paper 392A, Graduate School of Business, Columbia University, New York.

Fisher, M. L., & Jaikumar, L. N. (1981). A generalized assignment heuristic for the large scale vehicle routing. Networks, 11, 109–124. doi:10.1002/net.3230110205
Crispim, J., & Brandao, J. (2001). Reactive tabu search and variable neighbourhood descent applied to the vehicle routing problem with backhauls. In MIC’2001 4th Metaheuristic International Conference, Porto, Portugal, July 16–20.
Frankfurter, G. M., Kendall, K. E., & Pegels, C. C. (1974). Management control of blood through a short-term supply-demand forecast system. Management Science, 21, 444–452. doi:10.1287/mnsc.21.4.444
Crispim, J., & Brandao, J. (2005). Metaheuristics applied to mixed and simultaneous extensions of vehicle routing problems with backhauls. The Journal of the Operational Research Society, 56, 1296–1302. doi:10.1057/palgrave.jors.2601935
Ganesh, K., & Narendran, T. T. (2005). CLOSE: a heuristic to solve a precedence-constrained travelling salesman problem with delivery and pickup. International Journal of Services and Operations Management, 1(4), 320–343. doi:10.1504/IJSOM.2005.007496
Deif, I., & Bodin, L. (1984). Extension of the Clarke and Wright algorithm for solving the vehicle routing problem with backhauling. In A. Kidder, (Ed.), Proceedings of the Babson conference on software uses in transportation and logistic management, Babson Park, (pp. 75–96).
Ganesh, K., & Narendran, T. T. (2007a). CLOVES: A cluster-and-search heuristic to solve the vehicle routing problem with delivery and pick-up. European Journal of Operational Research, 178(3), 699–717. doi:10.1016/j.ejor.2006.01.037
Ganesh, K., & Narendran, T. T. (2007b). CLASH: a heuristic to solve vehicle routing problems with delivery, pick-up and time windows. International Journal of Services and Operations Management, 3(4), 460–477. doi:10.1504/IJSOM.2007.013466

Ganesh, K., & Narendran, T. T. (2007c). TASTE: a two-phase heuristic to solve a routing problem with simultaneous delivery and pick-up. International Journal of Advanced Manufacturing Technology, 37(11-12), 1221–1231. doi:10.1007/s00170-007-1056-2

Goetschalckx, M., & Jacobs-Blecha, C. (1989). The vehicle routing problem with backhauls. European Journal of Operational Research, 42, 39–51. doi:10.1016/0377-2217(89)90057-X

Goetschalckx, M., & Jacobs-Blecha, C. (1993). The vehicle routing problem with backhauls: properties and solution algorithms. Technical report. Georgia Institute of Technology, School of Industrial and Systems Engineering.

Gupta, R. K., & Chandra, P. (2002). Integrated Supply Chain Management in the Government Environment. Technical papers and presentation, Mathematical Modelling and Simulation Division, National Informatics Center.

Haimovich, M., & Rinnooy Kan, A. H. G. (1985). Bounds and Heuristics for Capacitated Routing Problems. Mathematics of Operations Research, 10(4), 527–542. doi:10.1287/moor.10.4.527

Halse, K. (1992). Modeling and solving complex vehicle routing problems. PhD thesis, Institute of Mathematical Statistics and Operations Research (IMSOR), Technical University of Denmark.

Hines, P., Rich, N., Bicheno, J., Brunt, D., Taylor, D., Butterworth, C., & Sullivan, J. (1998). Value stream management. International Journal of Logistics Management, 9(1), 25–42. doi:10.1108/09574099810805726
Houlihan, J. B. (1988). International supply chains: a new approach. Management Decision: Quarterly Review of Management Technology, 26(3), 13–19. doi:10.1108/eb001493 Jennings, J. B. (1973). Blood bank inventory control. Management Science, 19, 637–645. doi:10.1287/mnsc.19.6.637 Kendall, K. E. (1980). Multiple objective planning for regional blood centers. Long Range Planning, 13(4), 88–94. doi:10.1016/0024-6301(80)90084-9 Kirkpatrick, S., Gelatt, C. D. Jr, & Vecchi, M. P. (1983). Optimization by Simulated Annealing. Science, 220, 671–680. doi:10.1126/science.220.4598.671 Lambert, D. M., Cooper, M. C., & Pagh, J. D. (1998). Supply chain management: implementation issues and research opportunities. The International Journal of Logistics Management, 9(2), 1–19. doi:10.1108/09574099810805807 Laporte, G., & Osman, I. H. (1995). Routing problems: a bibliography. Annals of Operations Research, 61, 227–262. doi:10.1007/BF02098290 Mingozzi, A., Giorgi, S., & Baldacci, R. (1999). An exact method for the vehicle routing problem with backhauls. Transportation Science, 33, 315–329. doi:10.1287/trsc.33.3.315 Mosheiov, G. (1994). The traveling salesman problem with pickup and delivery. European Journal of Operational Research, 79, 299–310. doi:10.1016/0377-2217(94)90360-3 Mosheiov, G. (1995). The pick-up and delivery location problem on networks. Networks, 26, 243–251. doi:10.1002/net.3230260408 Mosheiov, G. (1998). Vehicle routing with pick-up and delivery: tour-partitioning heuristics. Computers & Industrial Engineering, 34(3), 669–684. doi:10.1016/S0360-8352(97)00275-1
Nagy, G., & Salhi, S. (2003). Heuristic algorithms for single and multiple depot vehicle routing problems with pickups and deliveries. Working Paper no. 42, Canterbury Business School.
Pierskalla, W. P., & Roach, C. (1972). Optimal issuing policies for perishable inventories. Management Science, 18(11), 603–614. doi:10.1287/mnsc.18.11.603
Nahmias, S. (1977). Comparison between two dynamic perishable inventory models. Operations Research, 25, 168–172. doi:10.1287/opre.25.1.168
Prastacos, G. P. (1981). System analysis in regional blood management. In Mohr and Kluge, (pp. 110-131).
Nanry, W. P., & Barnes, J. W. (2000). Solving the pickup and delivery problem with time windows using reactive tabu search. Transportation Research Part B: Methodological, 34(2), 107–121. doi:10.1016/S0191-2615(99)00016-8

New, S. J. (1997). The scope of supply chain management research. Supply Chain Management, 2(1), 15–22. doi:10.1108/13598549710156321

New, S. J., & Payne, P. (1995). Research frameworks in logistics: three models, seven dinners and a survey. International Journal of Physical Distribution and Logistics Management, 25(10), 60–77. doi:10.1108/09600039510147663

Parragh, S. N., Doerner, K. F., & Hartl, R. F. (2007). A survey on pickup and delivery problems. Part I: Transportation between customers and depot. Journal für Betriebswirtschaft, 72, 1210.

Pegels, C. C., Seagle, J. P., Cumming, P. D., & Kendall, K. E. (1975). A computer based interactive planning system for scheduling blood collections. Transfusion, 15(4), 381–386. doi:10.1046/j.1537-2995.1975.15476034565.x

Perry, D. (1996, November-December). Analysis of a sampling control scheme for a perishable inventory system. Operations Research, 47(6), 966–973. doi:10.1287/opre.47.6.966

Prastacos, G. P. (1984). Blood inventory management: an overview of theory and practice. Management Science, 30, 777–800. doi:10.1287/mnsc.30.7.777

Prastacos, G. P., & Brodheim, E. (1979). Computer based regional blood distribution. Computers & Operations Research, 6, 69–77. doi:10.1016/0305-0548(79)90018-2

Prastacos, G. P., & Brodheim, E. (1980). PBDS: A decision support system for regional blood management. Management Science, 26, 451–463. doi:10.1287/mnsc.26.5.451

Rabinowitz, M., & Valinsky, D. (1970). Hospital blood banking-An evaluation of inventory control policies. Technical Report, Mt. Sinai School of Medicine, City University of New York.

Ropke, S. (2003). A local search heuristic for the pickup and delivery problem with time windows. Technical paper. Copenhagen, Denmark: DIKU, University of Copenhagen.

Ropke, S. (2005). Heuristic and exact algorithms for vehicle routing problems. PhD thesis, Department of Computer Science at the University of Copenhagen (DIKU), Denmark.

Ropke, S., & Pisinger, D. (2004). A Unified Heuristic for a Large Class of Vehicle Routing Problems with Backhauls. Technical Report no. 2004/14, ISSN: 0107-8283, University of Copenhagen, Denmark.
Pierskalla, W. P. (1974). Regionalization of blood bank services. Project supported by the National Center for Health Services Research, OAHS. DHHS.
Ropke, S., & Pisinger, D. (2006). A unified heuristic for a large class of Vehicle Routing Problems with Backhauls. European Journal of Operational Research, 171(3), 750–775. doi:10.1016/j.ejor.2004.09.004
Tütüncü, G. Y., Carreto, C. A. C., & Baker, B. M. (2009). A visual interactive approach to classical and mixed vehicle routing problems with backhauls. Omega, 37(1), 138–154. doi:10.1016/j.omega.2006.11.001
Salhi, S., & Nagy, G. (1999). A cluster insertion heuristic for single and multiple depot vehicle routing problems with backhauling. The Journal of the Operational Research Society, 50, 1034–1042.
Wade, A., & Salhi, S. (2001). An ant system algorithm for the vehicle routing problem with backhauls. In MIC'2001 - 4th Metaheuristic International Conference.

Toth, P., & Vigo, D. (1997). An exact algorithm for the vehicle routing problem with backhauls. Transportation Science, 31, 372–385. doi:10.1287/trsc.31.4.372

Wade, A., & Salhi, S. (2003). An Ant System Algorithm for the mixed Vehicle Routing Problem with Backhauls. In Metaheuristics: Computer Decision-Making (pp. 699–719). Amsterdam: Kluwer Academic Publishers.

Toth, P., & Vigo, D. (1999). A heuristic algorithm for the symmetric and asymmetric vehicle routing problem with backhauls. European Journal of Operational Research, 113, 528–543. doi:10.1016/S0377-2217(98)00086-1

Wade, A. C., & Salhi, S. (2002). An investigation into a new class of vehicle routing problem with backhauls. Omega, 30, 479–487. doi:10.1016/S0305-0483(02)00056-7

Toth, P., & Vigo, D. (2002a). An Overview of Vehicle Routing Problems. In Toth, P., & Vigo, D. (Eds.), The Vehicle Routing Problem (pp. 1–26). Philadelphia: SIAM Monographs on Discrete Mathematics and Applications.
Wassan, N. (2007). Reactive tabu adaptive memory programming search for the vehicle routing problem with backhauls. The Journal of the Operational Research Society, 58, 1630–1641. doi:10.1057/palgrave.jors.2602313
Toth, P., & Vigo, D. (2002b). VRP with backhauls. In Toth, P., & Vigo, D. (Eds.), The Vehicle Routing Problem (pp. 195–221). Philadelphia: SIAM Monographs on Discrete Mathematics and Applications.

Wassan, N. A., Nagy, G., & Ahmadi, S. (2008). A heuristic method for the vehicle routing problem with mixed deliveries and pickups. Journal of Scheduling, 11(2), 149–161. doi:10.1007/s10951-008-0055-y
This work was previously published in Enterprise Information Systems and Implementing IT Infrastructures: Challenges and Issues, edited by S. Parthasarathy, pp. 210-225, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 6.9
Achieving Supply Chain Management (SCM):
Customer Relationship Management (CRM) Synergy Through Information and Communication Technology (ICT) Infrastructure in Knowledge Economy Ashutosh Mohan Banaras Hindu University (BHU), India Shikha Lal Banaras Hindu University (BHU), India
ABSTRACT

Information and communication technology infrastructure has changed modern business practice. Organizations' ever-changing information and communication technology infrastructure is opening new vistas that offer not only bundles of opportunities to encash but also tremendous obstacles that pose survival threats. Concern about organizational competitiveness and development is closely linked to notions of the information-sensitive society and global knowledge-based economies. Business organizations in the global knowledge economy can
emerge and grow rapidly by formulating and adopting innovative business practices. Information's impact is easily seen: it substitutes for inventory, speeds product design and delivery, drives process reengineering, and acts as a coordinating mechanism, helping different members of the supply chain work together effectively. While the potential of information sharing is widely promoted, relatively few companies have fully harnessed its capability to enhance competitive performance. The chapter tries to provide insight into how information and communication technology can be leveraged for supply chain value creation and how it makes possible synergy with customer relationship management.
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

The third wave of industrial transformation, more commonly known as the digital age or the knowledge economy, describes a new era of significant change, just as the second wave turned an agricultural society into an industrial economy. The knowledge economy is fueled by the rampant development of Information and Communication Technology (ICT) infrastructure, which facilitates not only data processing and information usage but also the application of acquired knowledge. Advances in information and communication technology have enabled many new developments in scientific knowledge and discovery, such as new manufacturing and distribution technologies, alternative energy and fuel sources, and biotechnology. Advancement in the digital arena includes not only the integration of information technology and communication technology, but also virtual reality, artificial intelligence, robotics, and organic light-emitting diodes, to name a few. Knowledge management has been one of the major driving forces of organizational change and value creation since the early 1990s. As with any evolving managerial concept, knowledge management has inevitably and increasingly become more complex. There is an ongoing convergence of related concepts that directly or indirectly connect with knowledge management, including various business processes and concepts (such as supply chain management and customer relationship management), intellectual capital, organizational learning and various learning theories, intangible assets, social networks, neural networks, market or competitive intelligence, competitive strategy, change management, corporate culture, creativity and innovation, information technologies (such as artificial intelligence applications, decision support systems and expert systems) and, not to forget, the most important dimension: organizational performance management. Although the knowledge economy is still in its formative years, it promises
to advance human understanding and knowledge in depth and breadth. Advancement of human knowledge may lead to improved problem-solving skills, decision-making skills, analytical, conceptual and strategic thinking skills, human intelligence in terms of Intelligence Quotient (IQ), Emotional Quotient (EQ) and Spiritual Quotient (SQ), inter-personal communication skills, and so on. As organizations continue to upgrade from intensive data-processing operations to information-based and then knowledge-based business operations, the need to understand knowledge management becomes clear. The need to integrate major functions across the entire supply chain, and not just within the enterprise, also begins to surface. The major functions within the supply chain include Supply Chain Management (SCM), Product Development Management (PDM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM) and Retail Network Management (RNM). For an enterprise that already manages its customers as its core business, customer relationship management is its core competency and, most likely, also its competitive advantage. Customer relationship management systems were born out of necessity, i.e., from sales transactions (such as call centres) and customer complaint handling. Only when marketing and customer orientation concepts began to take hold did customer relationship management begin to emphasize customer value-added and customized services and customer relationship building. There are three levels of customer relationship management that progress in sophistication: first, transaction-based data-processing customer services; second, informed-decision-based customized services; and third, knowledge-based, customer- and value-driven relationship management. In order for an enterprise to achieve the third level of customer relationship management sophistication by upgrading its core competency and competitiveness, knowledge management must be introduced into
the organization as a strategic change. To accomplish this to its full potential, the strategic intent of the enterprise must be explored. The enterprise must identify the intermediate and ultimate goals of developing its sustainable competitive advantage and core competency. Here, we must accept that customer service is the focal point of the enterprise's business strategy. In recent years, supply chain management has attracted renewed interest through the development of advanced information and communication technologies including the world wide web/internet, e-commerce, Electronic Data Interchange (EDI), Supply Chain Operations Reference (SCOR) models, Enterprise Resource Planning (ERP) systems, Radio Frequency Identification (RFID) and mobile technology. The availability of timely information enables the various stages of supply chain participants to be better integrated and coordinated, so as to implement a new, cost-effective and dynamic supply chain operation capable of dealing with small-lot, high-variety production and of manufacturing and supplying customized items in single units in a cost-effective and efficient manner (Lyon et al., 2006). The use of information and communication technology infrastructure enables rapid research, access and retrieval of information to support collaboration and communication between supply chain collaborators (Wong, 2005). In a global environment, supply chain strategy needs tight integration of the upstream suppliers of parts, the midstream manufacturer and assembler of components, and the downstream distributors of finished goods (Chen et al., 2003). Hence, there is an increased emphasis on electronic collaboration facilitated by internet and web-based systems. Such systems have enhanced cooperation, coordination of various decisions and sharing of resources, and have added value to products and improved partners' profitability (Cheng et al., 2006). That is why the information and communication technology infrastructure of organizations raises the issue of achieving supply chain and customer relation-
ship synergy while addressing the challenge of being competitive in the marketplace under the global knowledge economy.
Concept of Global Knowledge Economy & Management

Knowledge plays a major role in the development of the system and operates at all levels: from the individual, through the organization, to inter-organizational knowledge sharing, and on to institutional and cross-institutional knowledge across the whole system, known as the knowledge economy. As all human economic activity depends upon knowledge, in a trivial sense all economies are 'knowledge economies'. However, because knowledge cannot be possessed in the way that, for example, gold can, it can be appropriated by anyone capable of using it; mechanisms for appropriating knowledge are mainly a means of securing some economic return on invention rather than of keeping knowledge confidential. Cooke (2002) identified three key issues related to knowledge economies:

• Knowledge ages and is superseded by new knowledge; this ideally requires what Johnson (1992) calls 'creative forgetting', namely the stowing away of redundant knowledge and the learning of the new.
• The kind of knowledge that frequently carries high value nowadays is scientific knowledge, including social-scientific knowledge. So-called 'scientific management' was practised at the Ford plant in the first quarter of the last century, proving fatal first to craft-based production and ultimately giving way to mass customization methods in the car industry. This clearly indicates that economies of scope, i.e., variety, can outweigh economies of scale, i.e., volume.
• Knowledge economies are not defined merely by their use of scientific and technological knowledge, including their willingness to update knowledge and creatively forget old knowledge through learning. Rather, knowledge economies are characterized by the exploitation of new knowledge in order to create yet more new knowledge. For example, the discovery of the structure of the genetic code allows the sampling or recombination of DNA to produce therapeutic products for healthcare or food applications, while the decoding of the human genome both creates opportunities for value creation and opens up the need to discover the biochemistry of proteins, giving rise to the new knowledge field of proteomics. To the extent that genomics and proteomics give rise to tests or drugs superior to those presently available at comparable cost, knowledge is acting on knowledge itself to enhance productivity.

According to Archibugi et al. (1999), uncertainty, unequal access to information, dynamic economies of scale and other difficulties that economists normally present as phenomena of marginal importance are gradually becoming the rule rather than the exception under the global knowledge economy. Lundvall (1999) discusses two phenomena underlying the move towards an increasingly knowledge-based economy:
• The first reflects intensified and increasingly global competition, which makes it more difficult for firms in high-income countries to survive simply by producing traditional products with old-fashioned strategies and processes using semi-skilled or unskilled workers.
• The second relates to dramatic advances in information and communication technology, which are drastically reducing the price of data and simple information. At the same time, its diffusion gives rise to an increased demand for new skills and qualifications, as well as to the formulation of strategies and
realignment or restructuring of the various functions and processes of the organization.

Introduction of knowledge management into an organization is a key strategic issue and thus requires a proper strategic plan. The plan should involve not only the supply chain management and customer relationship management functions but also other supporting functions and processes. Adopting knowledge management also requires a strategic review of the organization's business model to deduce its effects and the challenges of implementation. Viewing knowledge management only at an operational level is not sufficient, as it involves issues of change management, corporate culture, effective leadership and competency development, all of which have a major impact on the business model and competitiveness of the organization. The objectives of knowledge management are:
• to avoid re-inventing the wheel in organizations and to reduce the duplication of knowledge-based activities, which basically reflects the intent of full knowledge utilization;
• to facilitate continuous innovation that can be capitalized upon; and
• to increase people's competencies, and thus organizational competencies, eventually leading to greater competitiveness.
A knowledge management program can facilitate knowledge transfer or knowledge flow within, as well as between, supply chain partners. Continuous knowledge creation and its application to product and service development and/or operational process development creates greater value and competitiveness, whether introduced as incremental creativity or disruptive creativity. New knowledge created, shared and applied within business operations also builds organizational intellectual capital, i.e., human capital, structural capital and relationship capital,
through the development of the competencies of the people in the organization. Human resource management plays a major role in competency development through job rotation, job enrichment, e-training/e-learning, cross-functional projects, experimentation, leadership development, team-building activities and the like. Collective individual competencies can be organized and converted into structural capital and relationship capital, creating greater financial value and competitiveness. Due to the strategic nature of the relationships, an organization must have a performance monitoring system to ensure continuous improvements throughout the supply chain.
Evolution of Supply Chain Management and Customer Relationship Management

In the last ten years, the term Supply Chain Management (SCM) has gained prominence (Cooper, Ellram, Gardner & Hanks, 1997). The concept has been gaining popularity at a fast pace owing to trends in global sourcing, an emphasis on time-, quality- and cost-based competition, and their cumulative contribution to greater environmental uncertainty. Some authors treat supply chain management in operational terms involving the flow of materials and products (Tyndall et al., 1998), others view it as a management philosophy (Ellram & Cooper, 1990), and still others view it in terms of a management process (LaLonde, 1997). Supply chain management has evolved through different phases of management practice. The earliest phase can be traced to the Materials Management function after 1850, with the introduction and development of the railroads. Traditionally, materials management was responsible for various aspects of material flow within an organization. The next period witnessed Physical Distribution Management, which at that time played a key role in creating and maintaining brand loyalty and a company's market share. The real difference between the materials management era and the physical distribution
era was that in the latter phase the importance of transportation, as well as packaging, increased significantly so that finished goods were delivered to customers without damage in transit. During World War II, problems involving the movement of huge quantities of supplies made logistics operations a distinct technical field for the smooth operation of the firm. Even after World War II, organizations could not achieve a smooth flow of resources with their physical distribution systems, which forced them to adopt cost-saving practices with full utilization of resources. These developments gave rise to logistics management. Logistics management therefore involves forecasting future requirements and arranging and managing the flow of raw materials, components, manufactured parts and packaged products. The rapid spread of information through ICT-enabled services gave rise to the concept of a unified world as a global village. Under these circumstances, organizations could not encash the opportunities existing throughout the world by their own efforts alone; they needed partners to pool the efforts and expertise of others to derive the benefits. According to Sardana and Sahay (1999), supply chain management extends the scope to link external partners like suppliers, vendors, distributors and customers, and advocates managing relationships, information and material flow across enterprise borders. Further, Galasso and Thierry (2008) pointed out that supply chain management emphasizes the necessity of establishing cooperative processes that rationalize or integrate the forecasting and management of demand, reconcile order-booking processes and mitigate risks. In the new business era, supply chain management is considered a medium for achieving short-term economic benefits and gaining long-term competitive advantage. Supply chain management can be considered an aggregation of approaches and efforts supporting the efficient consolidation of producers, suppliers and distributors, in effect a coordination of the value chain
so that products are produced and distributed in the right quantity, of the right quality, at the right time and at the right place, ultimately to achieve consumer satisfaction. Current advances in information and communication technology (ICT) have revolutionized supply chain management, making it a mechanism that enables diverse and geographically dispersed companies to create alliances to meet a new form of Internet-oriented consumer demand. These alliances represent advanced and dynamically changing networks that aim to become competitive by focusing their resources on bringing elements of e-business to specific market segments. In other words, the focus of supply chain management has shifted from the engineering and improvement of individual functional processes to the coordination of the activities of a dynamic supply chain network. The development of marketing as a field of study and practice is undergoing a re-conceptualization in its orientation from transactions to relationships (Webster, 1992). The emphasis on relationships, as opposed to transaction-based exchanges, is very likely to redefine the domain of marketing (Sheth, Gardner & Garrett, 1988). Relationship marketing attempts to involve and integrate customers, suppliers and other infrastructural partners into a firm's developmental and marketing activities (McKenna, 1991; Shani & Chalasani, 1991). Such involvement results in close, interactive relationships with suppliers, customers or other value-chain partners of the firm; that is why interactive relationships between marketing actors are inherently contrasted with the arm's-length relationships implied under the transactional orientation (Parvatiyar, Sheth & Whittington, 1992). The first customer relationship management development stage is a non-information-technology-assisted stage. Organizations belonging to this stage make very limited or no use of information technology as far as the management of customer relationships is concerned. However, these organizations use customer-related knowledge
management instruments and keep some type of mostly manual customer satisfaction and customer complaint data, which clearly indicates the organizations' positive attitude and orientation towards defensive relationship marketing. The second customer relationship management development stage is information-technology-assisted customer relationship management, predominantly a manual process that uses information technology to enhance the organization-customer relationship and to analyze customer-related data. Customer data is primarily collected manually but recorded and analyzed with information technology tools and techniques such as spreadsheets, database systems and statistical packages. Organizations at this stage are expected, even today, to have some Internet presence and to manage customer satisfaction and complaint behaviour adequately. The third customer relationship management development stage is information-and-communication-technology-automated customer relationship management, which emphasizes customer interaction using a number of technologies, such as the Internet and telephone/computer integration. Acquisition of customer profiles, tracking of customer purchase patterns and trends, and interactive service provision have been made possible by advances in information and communication technology. Organizations at this third stage have an active Web presence, engage in e-commerce and have implemented Enterprise Resource Planning (ERP) and operational customer relationship management systems aimed at business process optimization and sales force automation. Processing of customer requests and orders and the management of customer accounts are expected to be timely and accurate, and generally at a high level of efficiency. The fourth customer relationship management development stage is integrated customer relationship management, leading to customer personalization and a high level of service and customer satisfaction. At this fourth stage, companies employ sophisticated customer relationship man-
agement information systems providing highly integrated back-office, front-office and Internet functions. These integrated customer relationship management software systems should be flexible enough to adapt to changing customer needs over the product's life cycle and analytical enough to dynamically monitor consumer preferences. Personalization software includes a number of analysis technologies such as data warehousing and mining, collaborative filtering and rules engines. Supply chain optimization and analytic functions are also expected at this stage, using Web-enabled decision support software systems. The fourth stage essentially requires sharing of information across the supply chain partners, so that customer-centric knowledge is available to every decision maker inside the organization and among its partners, as well as sharing of relevant business information with customers and business partners, aiming at overall customer satisfaction, process efficiency and cost minimization.
EVOLUTION OF INFORMATION AND COMMUNICATION TECHNOLOGY (ICT) TOOLS AND TECHNIQUES IN GLOBAL KNOWLEDGE ORGANIZATIONS

As discussed earlier, information plays a crucial and dominant role in the global knowledge economy; it serves as the glue between the supply chain and customer relationships and enables processes, departments and organizations to work together so as to build an integrated, coordinated organization. Information and Communication Technology (ICT) tools and techniques consist of the hardware and software used throughout the organization to gather and analyze information. Over time, information and communication technology tools and techniques have evolved from a mere support function to an essential tool of the decision-making
process, which can be categorized into the five phases shown in Figure 1.
Phase – I

In the first phase of the evolution of information and communication technology tools and techniques, ICT was used to automate routine, repetitive, operational functions. The purpose was to replace clerical systems so that organizations could gain clerical and administrative savings of time and cost.
Phase – II

In this phase, information and communication technology tools and techniques became somewhat more sophisticated and the focus shifted towards the efficient and effective use of assets to enhance profitability. Organizational processes smoothed by the use of information and communication technology included cash management, sales analysis, resource scheduling and inventory management. The first two phases of ICT evolution were targeted at cost reduction and productivity enhancement; the second phase also helped organizations to integrate and coordinate their various functions. These efforts moved organizations towards a price war in the marketplace.
Phase – III

This phase witnessed the development of various communication networks and the easy availability of technology at reduced prices. These developments helped organizations to work more closely as a single entity and to re-define and re-design business processes to enhance productivity and profitability. The development of communication networks eased the tasks of data storage and retrieval. Some of the popular applications were
1559
Achieving Supply Chain Management (SCM)
Figure 1. Evolution of IT in organization
Just In Time (JIT), Enterprises Resource Planning (ERP), On-line shopping, payment systems through credit cards etc.
Phase – IV

This phase of ICT evolution is witnessing out-of-the-box thinking by organizations. ICT tools and techniques are helping organizations to stay connected with their suppliers, distributors and customers, i.e., inter-firm coordination and integration. With the flow of real-time information, ICT gives a big boost to real-time decision making with a higher level of accuracy. The last two phases of ICT evolution have helped organizations to focus on their core competency and to pool the strengths of others to add value to their offerings. This leads to competition in the marketplace based on value addition for customers.
Phase – V

Customer expectations are ever increasing, and this new paradigm will pose challenges for organizations. Technology will ensure the automation of all routine tasks of the organization, and intelligent human brains will be free to focus on strategies to serve customers better and better, i.e., the transition of the physical worker into the knowledge worker. The new organizational structure will be much more fluid, leading towards the virtual organization concept. The rigid hierarchies of today will give way to virtual teams formed by individuals with complementary skills, connected through IT innovations to deliver ever higher value to customers.
NEED FOR SUPPLY CHAIN MANAGEMENT AND CUSTOMER RELATIONSHIP MANAGEMENT SYNERGY

The shift in decision criteria, coupled with the heightened awareness of the customer in the global knowledge economy, poses new challenges for organizations. Brown and Gulycz (2002) identify the reasons why it is difficult to manage customer relationships profitably:

• Increasingly informed customers have more choice and are less loyal to their suppliers.
• New distribution channels and communication media mean that the customer interaction mix is more complex, difficult to integrate and potentially expensive.
• Delivery channels are increasingly complex.
• Numerous powerful technology enablers are now available but are expensive to implement, and historic returns are uncertain.
• Marketplaces and exchanges threaten to bring manufacturers closer to their customers, i.e., the process of dis-intermediation.
Consumers gravitate towards organizations that have been able to decrease response and delivery lead times and the costs associated with them. These changes in consumer expectations have forced a shift in the attitude and working of organizations and their managers, who now accept the need for close integration and partnership with complementary entities and, most importantly, with customers. In the new global knowledge economy age, the Indian market is forcing companies to segment customers not only along the traditional lines of demography and behavior but also on the basis of decision criteria. New segmentation parameters could include product and brand awareness, risk-taking propensity, value awareness and its delivery process, technology sensitivity, information orientation and relationship efforts.

Stevens (1989) stated that the objective of managing the supply chain is to synchronize the requirements of customers with the flow of materials from suppliers in order to effect a balance between what are often seen as the conflicting goals of high customer service, low inventory management and low unit cost. Ellram and Cooper (1990) define supply chain management as an integrative philosophy to manage the total flow of a distribution channel from supplier to ultimate user, whereas Sheth and Parvatiyar (2000) defined customer relationship management as the ongoing process of engaging in cooperative and collaborative activities and programmes with immediate and end-user customers to create or enhance mutual economic value at reduced cost. The Gartner Group, a reputed research organization, defines customer relationship management as follows: it is a business strategy, the outcomes of which optimize profitability, revenue and customer satisfaction by organizing around customer segments, fostering customer-satisfying behaviors and implementing customer-centric processes.

The objective of customer relationship management is to build long-term relationships that retain customers and prospective customers, whereas supply chain management tries to improve the efficiency of the procurement, production and distribution processes by taking a holistic view of the entire chain's operations across internal and external customers. The ultimate aim of both is to generate value for customers as well as for the organization. If a customer perceives positive value in the use of a product or service, the organization gains benefits in terms of cost advantages from retaining old customers, easier adoption of new products, word-of-mouth (WoM) recommendations by customers, etc. With the increasing importance of long-term relationships, trust, commitment, cooperation and information sharing in supply chain management, the management of the customer interface can greatly affect customer satisfaction and loyalty. As a result, it is likely to influence the levels of trust and openness in the information exchange process, which is crucial for supply chain processes and greatly facilitated by the tools and techniques of information technology.

Supply chain management is evolving into consumer-driven value chain management, which, in addition to pursuing efficiency improvements, recognizes the importance of consumer needs and attempts to capture the subtleties of consumer value as a source of differentiation and supply chain competitiveness (Christopher, 2005). According to Hau L. Lee (2001), managing demand for the total value maximization of enterprises and supply chains is the "competitive battleground" of the twenty-first century. These studies reveal the importance of customer-driven supply chain management for gaining competitive advantage. Similarly, Langerak and Verhoef (2003) highlighted the importance of embedding the customer relationship management concept in the organization's strategy, so that executives can take into account the complexities of the business environment and introduce policies that allow organizations to co-evolve, i.e., to evolve either alongside competitor organizations or in tandem with partner organizations.

The literature related to the proposed research is limited. The available studies concentrate primarily on multifaceted relationship issues in the supply chain, i.e., buyer-supplier relationships in various organizational sectors. A few studies also analyze the impact of supply chain relationships on organizational performance metrics. Vickery et al. (2003) indicated that supply chain integration incorporating both 'supplier partnering' and 'closer customer relationships' was directly related to customer service, and that the relationship between supply chain integration and financial performance was mediated by customer service. The advent of supply chain collaboration creates the need, at the intercompany level, to pay special attention to the understanding of collaboration in order to prepare chain members to create collaborative efforts successfully (Lambert et al., 2004).

A few indicative studies have highlighted the importance of supply chain management and customer relationship management synergy. Woods et al. (2002) of Gartner Research emphasized that if organizations pursue supply chain management and customer relationship management separately, the result can be missed opportunities and poor performance. According to McCluskey et al. (2006), a customer relationship does not end with delivery of the product or service, because that is just the beginning; they introduced the concept of Service Life Cycle Management, the process of combining supply chain management, customer relationship management and product life cycle management to meet an individual customer's ongoing needs after the initial sale and delivery. Based on the findings of an extensive global study of nearly 250 major consumer businesses and their executive teams in 28 countries, Deloitte Research (2006) concludes that 'consumer businesses that have integrated their customer relationship management and supply chain management capabilities have dramatically and measurably outperformed their competition in virtually every critical financial and operating category, i.e., two to five times more likely to achieve superior performance in sales, market share, customer service and other key measures.'

The applicability of supply chain management and customer relationship management synergy in the retail sector is analyzed by Tierney (2003) for the success of 7-Eleven, Japan, whose stock prices kept rising despite Japan's recession over the past ten years thanks to its demand-led management, which led it to identify sales patterns and customer preferences and to match them by re-engineering its category management and store product layouts, resulting in increased sales and profits. The concepts of supply chain management, customer relationship management and their synergy are catching the attention of Indian organizations. The application of supply chain management and customer relationship management synergy in Indian organizations is still at a nascent stage, which is why the literature on such synergy in the Indian context is limited or missing.
SUPPLY CHAIN MANAGEMENT: CUSTOMER RELATIONSHIP MANAGEMENT SYNERGY IN KNOWLEDGE ECONOMY

According to Harry et al. (2007), knowledge can contribute substantially to an intangible 'strategic resource' in the supply chain and customer relationship synergy in the knowledge economy. Information is used when making decisions about inventory, transportation and facilities within a supply chain, as well as in formulating and implementing strategies for customer service in global knowledge-economy organizations. Stone, Woodcock and Wilson (1996) suggest that, in future, customers will increasingly seek to manage the relationship themselves using new technologies, and that companies need to prepare themselves for this world.

Sisodia and Wolfe (2000) identify that the use of technology changes both the scale and the scope economies of relationship marketing. Even though it is possible to perform customer relationship management on a small scale without information and communication technology, ICT becomes essential for performing customer relationship management practices on a large scale, i.e., in the global economy: organizations can serve a large number of customers, even those with small transaction volumes. Besides scale economies, information and communication technology has also enabled scope economies, and these are not confined to customers only; the concept extends relationship marketing efforts to the outside partners of organizations, e.g. suppliers, vendors, etc. For example, the Internet/intranet can facilitate connections between focal firms, suppliers, dealers and customers. Sisodia and Wolfe (2000) identify the drivers of technology-enabled relationship marketing as shown in Figure 2. In broad terms, the intelligent use of information technology in relationship marketing will lead to a close customer focus with respect for the customer's time and intelligence. In the Indian context, Marico Industries rolled out its Midas – MI Net supply chain initiative with three basic objectives: to ensure real-time connectivity with distributors, to create a control room for management to focus on sales productivity, and to function as a kind of virtual sales office for the field force.

Figure 2. Drivers of technology enabled relationship marketing (source: Sisodia & Wolfe, 2000)

Information and communication technology systems facilitate the functioning of customer relationship management to a great extent; moreover, ICT systems play a predominant role in every stage of the supply chain (inventory, warehousing, transportation, manufacturing, and supplier and distributor management) by enabling companies to gather, analyze and extract information. A growing body of research has focused on the relationship between information and communication technology and supply chain management. Experts have sought to emphasize the critical role of information and communication technology in supply chain management applications (Andel, 1997; Barnes, 1997), to explore different kinds of ICT applications in supply chain management, and to document successful implementations of ICT in the supply chain management context (Wenston, 1997).

The evolution of information and communication technology applications in the supply chain started with legacy systems, which were narrow in scope, operational in nature and worked as stand-alone functions. Later, Enterprise Resource Planning (ERP) systems came into the limelight. These operational ICT systems gather information from across all of a company's functions; they are good at monitoring transactions but lack the analytical capabilities to determine what transactions ought to happen. At present, analytical systems are gaining an edge over previous systems. Analytical systems focus on planning and decision-making activities: they analyze the information supplied to them by legacy or ERP systems in order to reach correct decisions. For example, an ERP system may provide demand history, inventory levels and supplier lead times, on the basis of which an analytical application will determine the profitable level of inventory. Some of the analytical application systems are Advanced Planning & Scheduling (APS), Transportation Planning System (TPS), Demand Management System (DMS), Customer Relationship Management (CRM), Sales Force Automation (SFA), Inventory Management System (IMS), Manufacturing Execution System (MES), Warehouse Management System (WMS) and Supply Chain Management (SCM).
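As a concrete illustration of this analytical layer, the following sketch computes a reorder point and safety stock from ERP-style inputs (demand history and supplier lead time). The figures and the 95% service-level factor are assumptions for the example, using the classic textbook formulas rather than any vendor's algorithm.

```python
# Illustrative reorder-point calculation from ERP-style inputs: demand
# history and supplier lead time. Data and the 95% service factor
# (z = 1.65) are assumptions for this sketch, not a vendor algorithm.
import statistics

daily_demand = [42, 38, 51, 45, 40, 47, 44, 39, 50, 46]  # units/day, invented
lead_time_days = 6

mean_demand = statistics.mean(daily_demand)
stdev_demand = statistics.stdev(daily_demand)

z = 1.65  # approx. 95% cycle service level
# Classic formulas: safety stock = z * sigma_d * sqrt(L); ROP = d_bar * L + SS
safety_stock = z * stdev_demand * lead_time_days ** 0.5
reorder_point = mean_demand * lead_time_days + safety_stock

print(f"average daily demand : {mean_demand:.1f}")
print(f"safety stock         : {safety_stock:.1f}")
print(f"reorder point        : {reorder_point:.1f}")
```

The point of the example is the division of labour described above: the ERP system only records the inputs, while the analytical application decides what the inventory position ought to be.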
Implementation of an information and communication technology driven supply chain management initiative brings multiple process changes to both suppliers and customers. Changed processes lead to integrated information, coordinated workflow and synchronized planning. ICT-driven supply chain management initiatives bring in a coordinated workflow, which simplifies the complexity of procurement, order processing and financial flows. Besides, integrated processes among supply chain partners have a positive impact on product development and on the ability to deal with volatile demand resulting from frequent changes in competition, technology and regulations. Coordination of business processes among supply chain partners improves the buyer-supplier relationship, as the changed processes bring reduced variability and uncertainty in the information possessed by both parties. Extensive inter-organizational information sharing trims down information asymmetry and the likelihood of opportunistic moves at the expense of the other party. Electronic Data Interchange (EDI), for example, works as an inter-organizational system allowing business partners to exchange structured business information. Electronic data interchange has been reported to yield reductions in transaction cost, higher information quality, increased operational efficiency, better customer service and improved inter-firm relations.
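To make "structured business information" concrete, here is a small sketch that assembles a simplified purchase-order message loosely modelled on UN/EDIFACT ORDERS segments. Every identifier, party code and quantity in it is invented for illustration, and a production EDI message would carry many more segments and envelope headers.

```python
# Sketch of the kind of structured order message exchanged over EDI.
# Loosely modelled on UN/EDIFACT ORDERS segments (BGM, DTM, NAD, LIN, QTY);
# all identifiers and values here are invented for illustration.
segments = [
    "UNH+1+ORDERS:D:96A:UN",      # message header
    "BGM+220+PO-2024-0042+9",      # document type 220 = purchase order
    "DTM+137:20240315:102",        # order date
    "NAD+BY+5412345000013::9",     # buyer party identifier
    "NAD+SU+4098765000021::9",     # supplier party identifier
    "LIN+1++8714231510128:EN",     # line item with EAN article number
    "QTY+21:480",                  # ordered quantity
    "UNT+8+1",                     # trailer: segment count + message ref
]
print("'".join(segments) + "'")    # EDIFACT uses ' as segment terminator
```

Because both parties agree on this segment grammar in advance, the receiving system can parse orders automatically, which is precisely where the reported reductions in transaction cost and error rates come from.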
The impact of information and communication technology tools and techniques on the management of relationships in the supply chain can be assessed by measuring the effect of deploying inter-organizational information systems on several variables:

Quality of relations: Larson and Kulchitsky (2000) demonstrated that while EDI increases integration across the supply chain, it does not affect the feeling of cooperation between the parties. Whipple and Daugherty (2002) confirm that the deployment of EDI or other IT tools is no guarantee of integration; integration depends on the quality of the information exchanged. It is also the sum of the information provided and shared by each of the partners that improves the satisfaction perceived by the alliance. The quality of relations depends on the accuracy of the information exchanged among purchasers, suppliers, the focal firm, customers, etc. at the right time.

Cooperation and coordination: Bowersox and Daugherty (1995) believe that the development of broader external relations is a tendency resulting from the opportunities provided by information technology in coordinating inter-organizational activities. IT tools and techniques favour cooperative relations, although collaboration is first based on human interactions that IT may support but cannot replace.

Commitment and trust: The use of EDI contributes to the improvement of purchaser-vendor relations through the commitment developed by each party when the system is implemented. A greater frequency of electronic communication and information exchange is associated with greater commitment. Trust between the parties subsequently increases through the sharing of information and the reduction of poorly processed orders.

Balance of power and control within the supply chain: The presence of information and communication technology tools and techniques may change the balance of power between chain partners, in particular by making the information flow independent of the product flow. While products follow the traditional path from manufacturer to intermediary to retailer, EDI allows for a flat information structure in which the manufacturer is the single coordination point within the entire chain. Research indicates that such technologies first benefit the leader of the supply chain, i.e., the focal firm, establishing or reinforcing its dominance over its partners. The use of EDI technology may reduce the negotiating power of a distributor by allowing the manufacturer to collect more information and therefore achieve greater flexibility.

Efficient and effective knowledge management can yield benefits that include supply chain planning (Apostolou et al., 1999; Chandra et al., 2001), improved buyer and supplier relationships (Bates and Slack, 1998; Beecham and Cordey-Hayes, 1998), collaboration and coordination (Lin et al., 2002), improved supplier sourcing and new product development (Hansen, 2002; Vries and Brijder, 2000; Griffin and Hauser, 1996), smoother project development (Kumaraswamy et al., 2005), improved manufacturing process design (Samaddar and Kadiyala, 2006), and better tender selection and greater customer satisfaction (Gilsby and Holden, 2005). Christensen et al. (2005) examine the relationship between knowledge management strategy and the application of downstream-oriented supply chain knowledge and customer-related knowledge, and their work provides evidence that organizations must increase their capability of managing supply chain and customer knowledge resources in order to succeed in a knowledge economy.

A few case studies of commercial organizations that are applying knowledge management initiatives in the areas of supply chain management and customer relationship management are summarized below:

1. IBM, the world's largest IT provider, does business in over 140 countries. To enable business partners to provide excellent service with IBM, software named PartnerInfo has been developed. It provides business partners with the information and knowledge they need to service their customers: business partners can benefit from more than a million documents and thousands of applications. PartnerInfo gives business partners a single pathway to vital information, business programs and shared projects with IBM and other business partners. Through PartnerInfo, business partners can access the most up-to-date and accurate information and knowledge. This knowledge facilitates the product development stage and shortens the time spent on management communication; as a result, product life cycles are significantly reduced, meeting the fast-changing demand requirements of customers (Vries and Brijder, 2000).

2. Toyota's consulting team shares valuable knowledge with suppliers, helps make changes, and allows the value of the benefits to remain with the supplier for some time before Toyota begins sharing them in the form of lower prices for components (Dyer and Nobeoka, 2000).

3. General Motors (GM) consultants walk into supplier plants, analyze what changes are needed in operations, recommend the changes to the plant, and then immediately ask for a price decrease, leaving the plant to figure out how to implement the change in a timely manner (Dyer and Nobeoka, 2000).
4. Dell coordinates information sharing, incentive alignment and collective learning to focus on direct selling and build-to-order. Direct selling allows Dell to gain a better understanding of customer needs and wants, and this knowledge can be used to improve the accuracy of demand forecasts (a small forecasting sketch follows this list). The sharing of this knowledge about build-to-order, the visibility of demand information, inventory speed, and supplier and customer relationships results in Dell and its suppliers reaping mutual benefits, such as customer satisfaction, profitability and high market shares (Govindarajan and Gupta, 2001; Holweg and Pil, 2004).
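The forecasting gain that direct demand visibility brings can be illustrated with a minimal single-exponential-smoothing sketch. The weekly order figures and the smoothing constant are invented for the example and have no connection to Dell's actual systems.

```python
# Simple exponential smoothing over direct-sales demand history: the kind
# of forecast that better demand visibility improves. Data and the
# smoothing constant alpha are invented for this sketch.
def exponential_smoothing(history, alpha=0.3):
    """One-step-ahead forecast: f[t+1] = alpha*actual[t] + (1-alpha)*f[t]."""
    forecast = history[0]          # seed with the first observation
    for actual in history:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

weekly_orders = [120, 135, 128, 150, 142, 160]   # units/week, invented
print(f"next-week forecast: {exponential_smoothing(weekly_orders):.1f}")
```

The cleaner and more direct the demand signal fed into `history`, the less the forecast lags real customer behaviour, which is the substance of the Dell case.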
CREATING VALUE THROUGH SUPPLY CHAIN MANAGEMENT – CUSTOMER RELATIONSHIP MANAGEMENT SYNERGY IN KNOWLEDGE ECONOMY

Higher levels of responsiveness to changes in customer demand, a cost-effective production scheme for small product volumes, and fast and reliable distribution methods are the key success factors for organizations in the global knowledge economy. To achieve this, multiple independent supply chain members may take joint decisions on production and logistics for large parts of their collective supply chain work (Akkermans et al., 2004). This requires both information and knowledge flows to support decision-making (Choi and Hong, 2002). Alongside the knowledge flow there is also knowledge creation (e.g. new product development, innovative process improvements, technology development) and knowledge transfer between organizations (Samaddar and Kadiyala, 2006). Supply chain management and customer relationship management in the knowledge economy are becoming an integral part of every organization, whether large or small. The benefits derived from adopting information and communication technology driven supply chain management and customer relationship management synergy in the global knowledge economy can be categorized as follows.
Value for Organization

The job of strategic planning and scheduling, and its implementation, becomes easier for organizations under information technology driven supply chain management and customer relationship management synergy:

• Improved ability to produce plans that meet management objectives.
• Significant cost reduction in a short period.
• Improved cash flow and quick return on investment.
• Increased productivity and asset utilization.
• Better capacity planning, which results in increased productivity.
• Increased on-time procurement and delivery, which in turn enhances reliability.
• Technology-enabled supply chain management and customer relationship management synergy leads to higher levels of marketing effectiveness in terms of anticipating customer needs, an increased level of automated transactions, and the ability to time distribution, advertising and promotions effectively.
• Increased competitiveness and an enhanced reputation due to adherence to Available-to-Promise (ATP) commitments (illustrated in the sketch after this list).
• Better communication and coordination within and outside the organization.
• Improved customer, distributor and vendor relationships.
• Wired and wireless technologies give rise to a virtual office for the sales force, enabling direct access to product and company information.
• Overall increased profits and higher shareholder value.
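As a minimal illustration of the Available-to-Promise idea referenced above, the following sketch computes uncommitted stock per period from on-hand inventory, scheduled receipts and committed orders. All quantities are invented, and real ERP ATP logic is considerably richer (discrete time fences, allocations, multi-site sourcing).

```python
# Illustrative available-to-promise (ATP) check: uncommitted stock per
# period = on-hand + scheduled receipts - committed customer orders.
# Quantities are invented; real ERP ATP logic handles many more cases.
on_hand = 100
periods = [  # (scheduled receipts, committed orders) per week, invented
    (0, 40),
    (80, 70),
    (0, 30),
    (80, 60),
]

available = on_hand
for week, (receipts, committed) in enumerate(periods, start=1):
    available += receipts - committed
    promise = "can promise" if available > 0 else "cannot promise"
    print(f"week {week}: ATP = {available:4d} -> {promise} new orders")
```

An order-entry screen consulting such a running ATP figure is what allows a salesperson to quote a delivery date the organization can actually keep.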
Value for Customers

The entire philosophy of information technology driven supply chain management and customer relationship management synergy revolves around the customer, so customers get the maximum benefit whenever organizations adopt it:

• Easy retrieval and dissemination of product, service and organizational information, i.e., it is easy to place customized orders.
• Ability to track the progress of an order.
• Availability of the product or service according to the agreed available-to-promise norms.
• 'Value' achieved for the money spent.
• Quick troubleshooting through timely feedback and corrective actions.
CONCLUSION

The information and communication technology driven synergy of supply chain management and customer relationship management should feature such a degree of organization that no competitor is able to provide the same services more flexibly, faster or more efficiently in the ever-changing global knowledge economy. Based on the literature review, case studies and theory development, it is concluded that information and communication technology infrastructure development is a worthwhile and successful investment that enables organizations to achieve supply chain and customer relationship management synergy. An ideal synergy of supply chain management and customer relationship management, backed by information and communication technology infrastructure, gives an organization's employees, suppliers, customers and other stakeholders easy access to the information they need to do their jobs effectively, the ability to analyze it, and the means to share it easily with others in the knowledge economy. It facilitates scrutinizing every aspect of the organization's operations to find new revenue or squeeze out additional cost savings by supplying decision-support information. Therefore, only those organizations that have opted for a planned information and communication infrastructure will garner optimum profits by having the required resources available at the lowest price and on time, i.e., neither more, which would be wasteful, nor less, which would delay processes; either extreme is undesirable in the global knowledge economy.
REFERENCES

Akkermans, H., Bogerd, P., & Doremalen, J. V. (2004). Travail, Transparency and Trust: A Case Study of Computer Supported Collaborative Supply Chain Planning in High-tech Electronics. European Journal of Operational Research, 153(2), 445–456. doi:10.1016/S0377-2217(03)00164-4
Andel, T. (1997). Information Supply Chain: Set and Get Your Goal. Journal of Transportation and Distribution, 8(2).
Apostolou, D., Sakkas, N., & Mentzae, G. (1999). Knowledge Networking in Supply Chains: A Case Study in the Wood/Furniture Sector. Information-Knowledge-System Management, 1(3/4), 267–281.
Archibugi, D., Howells, J., & Michie, J. (1999). Innovation Policy in a Global Economy. Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511599088
Barnes, C. R. (1997). Lowering Cost through Distribution Network Planning. Industrial Management (Des Plaines), 39(5).
Bates, H., & Slack, N. (1998). What Happens When the Supply Chain Manages You? A Knowledge-based Response. European Journal of Purchasing & Supply Management, 4(1), 63–72. doi:10.1016/S0969-7012(98)00008-2
Beecham, M. A., & Cordey-Hayes, M. (1998). Partnering and Knowledge Transfer in the UK Motor Industry. Technovation, 18(3), 191–205. doi:10.1016/S0166-4972(97)00113-2
Bowersox, D. J., & Daugherty, P. J. (1995). Logistics Paradigms: The Impact of Information Technology. Journal of Business Logistics, 16(1), 65–81.
Brown, S. A., & Gulycz, M. (2002). Performance Driven CRM (pp. 7–23). New York: John Wiley & Sons.
Chandra, C., Kumar, S., & Smirnov, A. V. (2001). E-management of Scalable Supply Chains: Conceptual Modeling and Information Technologies Framework. Human Systems Management, 20(2), 83–94.
Chen, R. S., Lu, K. Y., Yu, S. C., Tzeng, H. W., & Chang, C. C. (2003). A Case Study in the Design of BTO Shop Floor Control System. Information & Management, 41(1), 25–37. doi:10.1016/S0378-7206(03)00003-X
Cheng, E. W. L., Love, P. E. T., Standing, C., & Gharavi, H. (2006). Intention to e-Collaborate: Propagation of Research Propositions. Industrial Management & Data Systems, 106(1), 139–152. doi:10.1108/02635570610641031
Choi, T. Y., & Hong, V. (2002). Unveiling the Structure of Supply Networks: Case Studies in Honda, Acura and DaimlerChrysler. Journal of Operations Management, 20(5), 469–493. doi:10.1016/S0272-6963(02)00025-6
Christensen, W. J., Germain, R., & Birou, L. (2005). Build-to-order and Just-in-time as Predictors of Applied Supply Chain Knowledge and Market Performance. Journal of Operations Management, 23(5), 470–481. doi:10.1016/j.jom.2004.10.007
Christopher, M. (2005). Logistics and Supply Chain Management: Creating Value Adding Networks. Upper Saddle River, NJ: Pearson.
Cooke, P. (2002). Knowledge Economies: Clusters, Learning and Cooperative Advantage. New York: Routledge.
Cooper, M. C., Ellram, L. M., Gardner, J. T., & Hanks, A. M. (1997). Meshing Multiple Alliances. Journal of Business Logistics, 18(1), 67–89.
Deloitte Research. (2006). Consumer Business Digital Loyalty Networks: Increasing Shareholder Value through Customer Loyalty and Network Efficiency. Deloitte Research, 1.
Dyer, J. H., & Nobeoka, K. (2000). Creating and Managing a High-Performance Knowledge-Sharing Network: The Toyota Case. Strategic Management Journal, 21(3), 345–367.
Ellram, L. M., & Cooper, M. C. (1990). Supply Chain Management, Partnership and the Shipper-Third Party Relationship. International Journal of Logistics Management, 4(1), 1–12. doi:10.1108/09574099310804911
Galasso, F., & Thierry, C. (2008). Design of Cooperative Processes in a Customer-Supplier Relationship: An Approach Based on Simulation and Decision. Engineering Applications of Artificial Intelligence. doi:10.1016/j.engappai.2008.10.008
Gilsby, M., & Holden, N. (2005). Applying Knowledge Management Concepts to the Supply Chain: How a Danish Firm Achieved a Remarkable Breakthrough in Japan. The Academy of Management Executive, 19(2), 85–89.
Govindarajan, V., & Gupta, A. K. (2001). Strategic Innovation: A Conceptual Road-map. Business Horizons, 44(4), 3–12. doi:10.1016/S0007-6813(01)80041-0
Griffin, A., & Hauser, J. (1996). Integrating R&D and Marketing: A Review and Analysis of the Literature. Journal of Product Innovation Management, 13(1), 137–151.
Hansen, M. T. (2002). Knowledge Networks: Explaining Effective Sharing in Multiunit Companies. Organization Science, 13(3), 232–249. doi:10.1287/orsc.13.3.232.2771
Harry, K. H., Chow, K. L., Choy, & Lee, W. B. (2007). Knowledge Management Approach in Build-to-order Supply Chains. Industrial Management & Data Systems, 107(6), 882–919. doi:10.1108/02635570710758770
Holweg, M., & Pil, F. K. (2004). The Second Century: Reconnecting Customer and Value Chain through Build-to-Order. Cambridge, MA: MIT Press.
Johnson, B. (1992). Institutional Learning. In Lundvall, B. (Ed.), National Systems of Innovation: Towards a Theory of Innovation and Interactive Learning. London: Pinter.
Kumaraswamy, M. M., Palaneeswaran, E., Rahman, M. M., Ugwu, O., & Ng, T. S. (2005). Synergising R&D Initiatives for Enhancing Management Support Systems. Automation in Construction, 15(6), 681–692. doi:10.1016/j.autcon.2005.10.001
La Londe, B. J. (1997). Supply Chain Management: Myth or Reality? Supply Chain Management Review, 1(Spring), 6–7.
Lambert, D. M., Knemeyer, A. M., & Gardner, J. T. (2004). Supply Chain Partnerships: Model Validation and Implementation. Journal of Business Logistics, 25(2), 21–42.
Langerak, F., & Verhoef, P. C. (2003). Strategically Embedding CRM. Business Strategy Review, 14(4), 73–80.
Larson, P. D., & Kulchitsky, J. D. (2000). The Use and Impact of Communication in Purchasing and Supply Management. Journal of Supply Chain Management, 36(3), 29–39. doi:10.1111/j.1745-493X.2000.tb00249.x
Lee, H. L. (2001). Ultimate Enterprise Value Creation Using Demand Based Management. Stanford Global Supply Chain Forum, (September), 1.
Lin, C., Hung, H. C., & Wu, J. Y. (2002). A Knowledge Management Architecture in Collaborative Supply Chain. Journal of Computer Information Systems, 42(5), 83–94.
Lundvall, B.-A. (1999). Technology Policy in the Learning Economy. In Archibugi, D., Howells, J., & Michie, J. (Eds.), Innovation Policy in a Global Economy. New York: Cambridge University Press. doi:10.1017/CBO9780511599088.004
Lyon, A., Coronado, A., & Michaelides, Z. (2006). The Relationship between Proximate Supply and Build-to-order Capability. Industrial Management & Data Systems, 106(8), 1095–1111. doi:10.1108/02635570610710773
McCluskey, M., Bijesse, J., & Higgs, L. (2006, August). Service Life Cycle Management. AMR Research, 1, 3.
McKenna, R. (1991). Relationship Marketing: Successful Strategies for the Age of the Customer. Reading, MA: Addison-Wesley.
Mohan, A. (2006). A Critical Analysis of Supply Chain Management Practices in Indian Fast Moving Consumer Goods Industry. Doctoral dissertation, Faculty of Management Studies (FMS), University of Delhi, Delhi, India.
Parvatiyar, A., Sheth, J. N., & Whittington, F. B. (1992). Paradigm Shift in Interfirm Marketing Relationships: Emerging Research Issues. Working Paper No. CRM 92-101, Emory University, Center for Relationship Marketing, Atlanta, GA.
Samaddar, S., & Kadiyala, S. S. (2006). An Analysis of Inter-organizational Resource Sharing Decisions in Collaborative Knowledge Creation. European Journal of Operational Research, 170(1), 192–210. doi:10.1016/j.ejor.2004.06.024
Sardana, G. D., & Sahay, B. S. (1999). Strategic Supply Chain Management: A Case Study of Business Transformation. In Sahay, B. S. (Ed.), Supply Chain Management for Global Competitiveness (pp. 25–46). New Delhi, India: Macmillan India.
Shani, D., & Chalasani, S. (1992). Exploiting Niches Using Relationship Marketing. Journal of Consumer Marketing, 9(3), 33–42. doi:10.1108/07363769210035215
Sheth, J. N., Gardner, D. M., & Garrett, D. E. (1988). Marketing Theory: Evolution and Evaluation. New York: John Wiley.
Sheth, J. N., & Parvatiyar, A. (2000). The Domain and Conceptual Foundation of Relationship Marketing. In Sheth, J. N., & Parvatiyar, A. (Eds.), Handbook of Relationship Marketing (pp. 1–38). London: Sage.
Sisodia, R. S., & Wolfe, D. B. (2000). Information Technology: Its Role in Building, Maintaining and Enhancing Relationships. In Sheth, J. N., & Parvatiyar, A. (Eds.), Handbook of Relationship Marketing (pp. 523–563). London: Sage.
Stevens, G. C. (1989). Integrating the Supply Chain. International Journal of Physical Distribution and Materials Management, 19(8), 3–8.
Stone, M., Woodcock, N., & Wilson, M. (1996). Managing the Change from Marketing Planning to Customer Relationship Management. Long Range Planning, 29, 675–683. doi:10.1016/0024-6301(96)00061-1
Tierney, S. (2003). Tune Up for Super-Efficient Supply Chain with a Triangle. Journal of Logistics Management, 11(2), 45–56.
Tyndall, G., Gopal, C., Partsch, W., & Kamauff, J. (1998). Supercharging Supply Chains: New Ways to Increase Value through Global Operational Excellence. New York: John Wiley & Sons.
Vickery, S. K., Jayram, J., Droge, C., & Calantone, R. (2003). The Effects of an Integrative Supply Chain Strategy on Customer Service and Financial Performance: An Analysis of Direct vs. Indirect Relationships. Journal of Operations Management, 21(5), 523–539. doi:10.1016/j.jom.2003.02.002
Vries, E. J., & Brijder, H. G. (2000). Knowledge Management in Hybrid Supply Channels: A Case Study. International Journal of Technology Management, 20(8), 569–587. doi:10.1504/IJTM.2000.002882
Webster, F. E., Jr. (1992). The Changing Role of Marketing in the Corporation. Journal of Marketing, 56(4), 1–17. doi:10.2307/1251983
Wenston, R. (1997). Domino's Supply Chain Overhaul to Save Dough. Computerworld, 31, 50.
Whipple, J. M., Frankel, R., & Daugherty, P. J. (2002). Information Support for Alliances: Performance Implications. Journal of Business Logistics, 23(2), 67–82.
Wong, K. Y. (2005). Critical Success Factors for Implementing Knowledge Management in Small and Medium Enterprises. Industrial Management & Data Systems, 105(3), 261–279. doi:10.1108/02635570510590101
Woods, J., Peterson, W. K., & Jimenez, M. (2002, October). Demand Chain Management Synchronizes CRM and SCM. Gartner Research.
This work was previously published in Enterprise Information Systems and Implementing IT Infrastructures: Challenges and Issues, edited by S. Parthasarathy, pp. 304-322, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Section VII
Critical Issues
This section addresses conceptual and theoretical issues related to the field of enterprise information systems, which include issues related to customer relationship management, critical success factors, and business strategies. Within these chapters, the reader is presented with analysis of the most current and relevant conceptual inquiries within this growing field of study. Particular chapters address the successes of enterprise resource planning through technology, and present strategies for overcoming challenges related to enterprise system adoption. Overall, contributions within this section ask unique, often theoretical questions related to the study of enterprise information systems and, more often than not, conclude that solutions are both numerous and contradictory.
Chapter 7.1
Preparedness of Small and Medium-Sized Enterprises to Use Information and Communication Technology as a Strategic Tool

Klara Antlova
Technical University, Czech Republic
DOI: 10.4018/978-1-60566-892-5.ch019

Abstract

The objective of this chapter is to emphasize issues connected with the adoption of information and communication technology (ICT) as a strategic tool contributing to further organizational growth. This understanding is based on the results of a qualitative analysis of a group of small and medium-sized enterprises (SMEs). The gradual development of a group of 30 organizations has been monitored over the last fifteen years during their co-operation with the Technical University of Liberec. These organizations have hosted one-year student placements in which students, as part of their Bachelor's degree course, undertake long-term work experience enabling them to integrate the practical and theoretical aspects of their course. The research focused on SME management's approach to ICT, its utilization for competitive advantage, and its relation to and role in defining business strategy. Other aspects of the study looked at the effect of ICT on organizational performance, the knowledge and skills of the employees, training, and organizational culture. The results indicate that successful and growing companies have gradually established business, information and knowledge strategies and make strategic use of ICT.
Introduction

The current business world is undergoing constant change, and to survive in the current competitive environment companies have to be able to respond to such changes. The ability to react quickly to these changes is itself becoming a competitive advantage for small and medium-sized enterprises over those, whether other SMEs or larger companies, that are unable to be as flexible. These companies also demonstrate their ability to work with large companies, with which they create new partnerships and strategic alliances. Such adaptable companies, working in a global environment with the constant threat of new competition and changing markets, legislation and suppliers, show that it is necessary to possess data, information and knowledge and to value the experience of their employees. The need for new knowledge and skills requires that employees improve their skills and knowledge continually; by increasing their intellectual capital, employees contribute to consistent improvement in services and product innovation. The transformation of intellectual capital into new products and services, however, requires a new approach to the management of the organization, a flexible organizational structure and the use of information and communication technologies.

For many SME owners the business represents their lifestyle, fulfilling their personal dreams and visions. The research suggests that the owner's approach, age, personality, experience, managerial skills, education, enthusiasm, etc. are important for the growth of an SME. The owner's approach may also be influenced by relations with other family members who participate in the ownership or running of the organization; sometimes quite complicated family relations can be observed to have negative impacts on the running of the business.

The structure of this chapter is as follows: the first part analyses ICT adoption in the group of thirty small and medium-sized companies, while the second part looks at the most frequently cited reasons and problems acting as barriers to ICT adoption. Understanding these barriers can help to find better solutions for ICT adoption and implementation in SMEs and to show how successful, growing companies use ICT as a strategic advantage.
Definition of Small and Medium-Sized Enterprises

SMEs are often treated as a homogenous group of business entities meeting certain criteria, such as turnover or headcount, but the reality is different: they are better considered a heterogeneous group with differing needs and objectives. The European Union (EU) defines SMEs according to two sets of criteria. The first is a set of qualitative criteria, which characterize them by independent leadership connected with ownership of the company, limited division of production and technologies, limited capital owned by one or several owners, focus on local markets, etc. The second is a set of quantitative definitions based on the comparison of economic factors, i.e. turnover, capital and number of employees. The definition used for the purpose of this analysis divides the companies according to the following criteria:

• A medium-sized company is an organization with between 50 and 250 employees, an annual turnover not exceeding € 50 million, or a yearly balance sheet not exceeding € 43 million.
• Small companies are defined as companies employing fewer than 50 employees, with an annual turnover or yearly balance sheet not exceeding € 10 million.
• Micro companies are defined as businesses employing fewer than 10 people, with an annual turnover or yearly balance sheet not exceeding € 2 million.
Benefits of SMEs for the Economy

Based on a long-term survey of ICT use in European countries (E-business W@tch, 2007), the economic and social benefits of small and medium-sized enterprises are their:

• Ability to mitigate the negative impact of structural changes.
• Ability to work as sub-contractors of large companies.
• Ability to create conditions for the development and implementation of new technologies.
• Ability to create work opportunities with low capital investment.
• Ability to adapt quickly to the requirements and fluctuations of the market.
• Ability to operate in marginal areas of the market that are not attractive to bigger companies.
• Ability to decentralize business activities.
• Ability to support the fast development of regions, small towns and communities.
Negative Factors Affecting the Business of SMEs

The development of an SME is also influenced by its surrounding economic environment, which affects the demand for products and services and either facilitates or limits the SME's access to the markets that support its further wealth creation and growth. In addition, managers of SMEs have to be able to respond flexibly to changes in their environment and to the wishes of their customers. To do this they need appropriate tools: not only the knowledge of their employees, but also information and communication technologies. Previous research (Antlova, 2005) showed that the potential growth and survival of SMEs is largely dependent on the environment surrounding the companies. Small and medium-sized enterprises are negatively affected by the following factors:

• Low economic power compared to large companies.
• Difficulty gaining access to capital, with a consequent limited ability to finance development activities.
• Worse access to specialized training and education compared to larger companies.
• Lower access to necessary information and consultancy services.
• Unfair competition from large companies and dumping prices of imported products.
• Limited sale of finished products on the domestic market and increased cost of export.
• Competition from retail organizations managed by financially strong companies.
• Weak position in public tenders.
• Non-payment and delays in receiving payments, resulting in secondary financial insolvency.
• High administrative demands from government bodies and agencies.
Literature Review

The existing literature proposes major differences between SMEs and large organizations (Hekkila, 1991):

• SMEs tend to use computers more as tools and less as a communications medium.
• SMEs have much fewer resources available to implement ICT solutions.

Planning in a small firm has the following characteristics (Jeffocate, 2002):

• Often done on an ad hoc basis.
• Frequently only a mental activity of the owner or manager.
• Informal, sporadic, and closed.
• Often relying on advice from random acquaintances with less skill and/or less experience than the owner himself.

SMEs also have particular problems in adopting and using ICT. They usually do not have the appropriate skills available in-house and thus have to train existing staff or purchase those skills in the marketplace (Valkokari & Helander, 2007).
ICT must, however, be associated with a systematic approach to management and decision making, and its introduction requires careful planning (Kerney & Abul-Nour, 2004). Although the technology is much cheaper than before, it still represents a considerable investment for SMEs, which traditionally lack such funds (Levy, Powell & Yetton, 2002). The introduction of ICT, which may lead to dramatic changes in the business's fundamental activities, requires awareness and basic knowledge at the management level, but many owners of SMEs appear to be too busy "surviving" to invest time in such projects (Oh & Pinsonneault, 2000; Cocrill & Lewis, 2002). There is therefore a significant risk that efforts to introduce ICT will be unsuccessful, and the cost of such failure may be fatal for a small firm lacking adequate financial and productive cushioning (Craig & Annear, 2003). It is not surprising that many SMEs have avoided such risks by ignoring ICT (Gemino, Mackay & Reich, 2006).

Data collection and methodology

During the long-term qualitative survey, the approach to and use of ICT in small and medium-sized companies has been monitored. The aim of the research is to find the key factors influencing preparedness for the strategic use of ICT and to be better prepared for further education activities for owners and managers of SMEs.
The thirty companies (all from the Czech Republic) were analysed in a qualitative study spanning fifteen years. The data were collected through interviews with the managers or owners, and also through the cooperation of students and their teachers on real projects. Table 1 describes the business areas of the organizations and their numbers of employees. During this long period the SMEs developed and passed through different levels of change affecting their size and style of management; this development of SMEs is also discussed in the literature (Greiner, 1972). Every stage of development is characterized by several key factors. This research has used the five-stage model of Levy and Powell (2005). The five stages of development are:

• Commencement (focus on profit, necessity of transparency and acceleration of administration).
• Survival (increasing number of customers, higher need for data sharing inside the company).
• Successful position in the market (competitive pressure, implementation of quality certificates, etc.).
• Expansion (financial issues, electronic communication with customers and suppliers).
• Maturity (necessity of innovation, change in management, training and education of employees).
Table 1. Organizations grouped by area of business and number of employees

Business area (industry) | Total number of org. | 1-9 empl. | 10-49 empl. | 50-199 empl. | 200-250 empl. | 25% share of a foreign owner
Manufacturing            | 4                    |           |             | 3            | 1             | 2
Building                 | 4                    |           | 4           |              |               |
Services                 | 11                   | 3         | 6           | 2            |               | 5
Logistic                 | 2                    | 1         | 1           |              |               |
ICT                      | 6                    | 1         | 3           | 2            |               |
Automotive               | 3                    |           |             | 1            | 2             |
In total                 | 30                   | 5         | 14          | 8            | 3             | 7
These stages are influenced by competitive pressure, changes in the company's environment, the necessity of managerial changes, and a number of other internal and external factors. Therefore, the interview questions for the managers or owners focused on the development of the organizations and the identification of stages, covering aspects such as:

• Market opportunities.
• Managerial experience.
• The surrounding environment of the company.
• New technologies.
• New products of competitors on the market.
• The legal environment, etc.
• The internal culture of the company.
• The approach to employee learning.
The analyzed organizations were divided into five groups according to the level of their development during their business existence. Each group has its specific way of managing the organization, organizational structure, presence or absence of a corporate strategy, level of ICT utilization, internal and external integration of ICT-supported processes, and way of utilizing employees' knowledge. The objective of this division is to highlight the changes in management's information needs and to better understand their approach to ICT and the motivations for, and barriers to, ICT adoption. For the purpose of this division the following parameters have been used, as they contribute significantly to the acceptance (or not) of ICT (a small illustrative sketch follows the list):

• Defining the corporate strategy: the direction of the organization over the long term (financial situation of the organization, competitive environment and position in the market, parameters of the planning process and assumed development of the organization, way of managing and general culture in the organization).
• Defining the information strategy: the status and expected development of ICT utilization (internal communication of employees, use of an internal computer network or the Internet, access from home, support of the management process).
• Knowledge management (focused on the knowledge and skills of employees and the way they share their knowledge).
• Innovation (investment in research and design, searching for new ways and possibilities in services and marketing).
• Communication with customers and suppliers, including management of the supply chain and online ordering.
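The classification itself was carried out qualitatively by the author. Purely as an illustration of how such criteria could be operationalized, the following hypothetical sketch encodes the five parameters as booleans and maps a simple fulfilment count to a stage; the thresholds and the mapping are invented for the example and are not the study's method.

```python
# Purely illustrative heuristic (not the study's actual method): encode the
# five assessment parameters as booleans and map them to a development stage.
from dataclasses import dataclass

@dataclass
class SMEProfile:
    corporate_strategy_defined: bool
    information_strategy_defined: bool
    knowledge_management: bool
    innovation_investment: bool
    online_customer_supplier_links: bool

def estimate_stage(p: SMEProfile) -> str:
    """Map the number of fulfilled parameters to one of the five stages."""
    stages = ["commencement", "survival", "successful position",
              "expansion", "maturity"]
    score = sum([p.corporate_strategy_defined, p.information_strategy_defined,
                 p.knowledge_management, p.innovation_investment,
                 p.online_customer_supplier_links])
    return stages[min(score, len(stages) - 1)]

print(estimate_stage(SMEProfile(True, True, False, False, True)))  # 'expansion'
```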
1st Stage of Growth: Commencement

At the beginning of a company's development the owners are able to manage it on their own and are familiar with its details. Taking a more detailed look at this initial stage, we can observe a very simple organizational structure, close relations between the employees and the owner, short-term strategic decisions and a missing long-term strategic plan. Investment into ICT is minimal, usually for the purpose of administration. The objective of the organization is mainly to generate profit and to maintain its position on the market. Gradual growth of the company is connected with the transition into the second stage, which can be described as "survival". This lowest level is represented by a group of companies where strategic objectives exist only in the minds of the owners or managers and can often be summed up as an effort to survive. It is common that the corporate strategy is not written down, and companies in this group run their business mainly in the area of services. The development of the skills and knowledge of employees is also neglected; employees are not motivated to improve their skills and knowledge. The analyzed organizations have the following characteristics:

• Lack of financial sources for the purchase of ICT, training, etc.
• A corporate strategy that can be described as "survival": maintaining the company's position in the competitive environment of the market.
• Limited number of employees.
• Insufficient knowledge of ICT.
• Communication with customers and suppliers only by e-mail, phone or in writing.
• Information support provided by an office software package.
• Failure of customers to comply with financial obligations.
• Often little specialization of individual associates, with everybody doing what is presently needed.

2nd Stage of Growth: Survival

At this next level the majority of effort is devoted to maintaining a stable group of customers, with the emphasis on maintaining the company's position on the market. A strategic plan is still missing and the information systems in the organization are usually simple (often a standard office software package). The owner, however, begins to have issues with maintaining a detailed insight into all orders as the number of employees increases. Gradually, as the number of orders increases together with the number of customers, employees, suppliers and partners, the owner has to delegate a number of tasks to other employees. Despite this, the owner still remains the key person for strategic decision-making. Simultaneously, the need for management changes is emerging. These circumstances lead the company into the third stage, when it becomes established in the market.

The survival stage is represented by a group of organizations with slight growth, where the increasing number of customers drives the need to speed up administrative processes, the employees need to share a growing amount of data, and the managers need a better overview of customers' orders. The owners in this group are already trying to search for and formulate their corporate strategy. The organization typically tries to establish itself in areas with lower competition, such as newly developing areas focused on specialized services requiring, for instance, environment-related certificates. An information strategy in such organizations is still not defined. The parameters of these organizations can be summed up as follows:

• The organization aims to survive successfully in the competitive environment and possibly improve its position on the market.
• The corporate strategy is formulated with the objective of decreasing cost and increasing effectiveness, but some strategies are based on innovative special services responding to e.g. environmental requirements, utilizing the benefit of a less competitive environment.
• The owners respond to the increasing number of customers by trying to streamline economic and administrative activities, striving to maintain a better overview of the financial situation and individual orders.
• Customers of this group are usually small and middle-sized organizations.
• Some organizations have already tried to utilize some applications of electronic business, e.g. an electronic e-shop, or at least start to consider it.
• The organizations typically have software applications for accounting and warehouse management.
• These organizations are very often owned by families, and family relationships play a key role in decision making, innovation and growth.
3rd Stage of Growth: Successful Position in the Market

In the third stage of growth the company is successfully growing and the manager begins to undertake mid-term planning. In this phase of development, the further growth of the company depends significantly on the approach of the manager or owner. The companies are forced to respond to market demands and the wishes of the customers, and have to be competitive in order to avoid declining to an earlier stage. Consequently the managers need to have a vision for the organization and to share it with the employees. The need for strategic management is growing and simultaneously the necessity of possessing sufficient information about the company is increasing. This stage is connected with requirements for better utilization of ICT. Typically the companies utilize a database of customers, accounting systems and a warehouse system. In this group there are organizations trying to increase the number of customers and to respond flexibly to their needs and wishes. This group includes small manufacturing companies focused on quite special products, e.g. machines for crushing and processing of metal waste, or special glass furnaces. These organizations have the following common characteristics:

• Use of ICT is based on applications such as CAD (Computer Aided Design), in addition to accounting and other administrative applications.
• These organizations are aware of the importance of ICT and often have an information strategy, within which they consider the future integration of electronic shopping into their business model.
• The organizations have a certain organizational structure, i.e. the owner has co-workers participating in managing areas of the organization, e.g. commercial, marketing and manufacturing.

4th Stage of Growth: Expansion

This fourth stage of growth, or expansion, is very hard for the SME, as the company is trying to become an important player in its business area. That is why this stage requires the owner or manager to have experience of planning and management, as well as sufficient finance to enact these plans. There is also a requirement for increased internal and external communication. These organizations are aiming to become important market players. The owners or managers have defined visions they wish to achieve and share them with their employees. With an increasing number of employees there is a need for the owner to formalize the organization structure and to delegate responsibility. This is connected with the need to share the visions and business strategies of the organization with a greater number of employees, which means taking into account more opinions, experience and knowledge, which is important for success and growth. This group already contains organizations, usually manufacturing facilities, that are often part of a supply chain, with the following characteristics:

• Standard ERP (Enterprise Resource Planning) systems for communication with their partners, so they utilize electronic exchange of data.
• The organizations have defined corporate and information strategies, they have a hierarchic organizational structure, and they aim to optimize their processes and information support.
• This group differs from the previous groups of organizations by higher utilization of the knowledge of the employees.
5th Stage of Growth: Maturity

To achieve further growth the organization has an increasing need for data and information to support planning, managing and strategic decision-making. Information is a strategic source determining business success, providing data about customers, financial results, capabilities and opportunities for evaluating changes to business objectives. That is the only way the organization may ensure its development and growth. Consequently it needs effective tools, i.e. an information system enabling the company to maintain, sort, analyze and search for data in support of its internal processes. The information system may now yield a competitive advantage compared to other companies. Investment in ICT requires a long-term strategic plan for the organization based on detailed analysis of the current status. The manager or owner has to have a clear vision of the expected outcome and benefit of ICT; this can be demanding on the knowledge of the managers or owners of SME. The purchase of information technologies creates a lasting obligation, as the financial sources of SME are limited. Owners of the companies should recognize that information systems may strongly impact the capacity, strength and chances of survival of the company. The speed of technological innovation, together with demanding implementation in the company environment, supports the serious need for planning the use of information technologies. To be successful here implies that each decision regarding information systems will conform to the wider business strategy of the company.

This level of growth is represented by organizations that are significant market players. These organizations typically have a higher number of employees (80-250), are managed by a team of managers and have a hierarchic structure of leadership. The group differs from the previous groups especially in its focus on the management of knowledge within the workforce. These organizations show effort to optimize internal and external processes. Two organizations run their business in the area of ICT and the three remaining organizations are manufacturing companies in the building industry. Companies in this last group, with their approach to using ICT and emphasis on the sharing of employees' knowledge, can be good role models for other companies. They have the following common characteristics:

• Existence of a written corporate and information strategy.
• A mature level of ICT processes is typical.
• They are aware of the importance of the knowledge of their employees.
• Access to the information system by employees from home.
• Willingness to support training of employees.
• A culture of innovation in the organization.
• Application of different management methods (Balanced Scorecard, ABC analysis, etc.).
• On-line communication with customers and suppliers.
• Use of e-commerce (buying and selling on the internet).
The contribution of the above-mentioned categorization lies in the identification of the different approaches that managers or owners take to ICT adoption and its strategic use. The companies from the fourth and fifth groups are successfully developing and growing over the long-term horizon. The research into the analyzed companies also investigated the drivers for the purchase of ICT. Typically, ICT adoption in these companies was not driven by the aim of achieving strategic advantage; the most frequently cited reasons were:

• Pressure from suppliers, customers and competitors.
• Influence of the specific area of business.
• Size of the organization.
• Implementation of different quality certificates.
• Knowledge of the employees or owners.

The majority of the specified factors that contributed to decision-making about the acceptance of information and communication technologies can be grouped as follows:

• Technological factors (image of the company, relative advantage, need for compatibility).
• Factors arising from the environment of the organization (competitive pressure of customers and suppliers, changes in the marketplace).
• Organizational factors (management, size of company, specialization of company, costs).
• Individual factors (knowledge of the manager, enthusiasm for ICT, innovation).
It is obvious that the factors mentioned are not all-inclusive and it is recognised that certain simplifications and judgments were applied. These analyzed factors can help to better understand the issues connected with ICT adoption. The next part of the chapter explains these factors in more detail.
Technological Impact

The majority of the bigger companies in the analyzed group that have a 25% share of foreign investment were forced to change their current ICT as a requirement for the harmonization of information and communication systems with the parent organization. These organizations had no choice in the selection of an appropriate information system and its supplier; both were nominated by the parent organization. Management of the information system was also carried out by the owning company, usually abroad, with changes to the system, such as enrolling a new user, taking longer than might be expected. Some of the organizations are also beginning to realize the benefits of electronic business (selling and buying on the internet) and are trying to keep abreast of competitors. Another reason for the implementation of ICT is the gradual aging and insufficient capacity of the existing hardware and software in the organization.

The following Table 2 describes how many organizations use ICT for the purpose of communication within the company and with customers, for other options of electronic business, for outsourcing, and for training in ICT. The comparison shows a significant contrast between the utilization of electronic mail (used by 100% of the organizations in the analyzed sample) and other ICT applications, such as electronic supplying, acceptance of electronic purchase orders from customers and utilization of an intranet (internal web pages). The least utilized applications are electronic supplying and outsourcing. EDI-type communication with partners is typically used by all organizations in the automotive industry, and all bigger organizations have implemented a standard ERP system and devote significant effort to training their employees in ICT. When implementing any other ICT application, a number of companies do not consider further evaluation and monitoring of the benefits of the chosen solution.
Impact of the Company's Environment

A very frequent reason for the innovation or purchase of ICT is pressure from customers for mutual communication, in order to enable electronic data interchange (EDI). This is a way of electronically exchanging structured data (e.g. orders) on the basis of agreed standards between the information systems of individual business partners. Such pressure from customers is common both in the automotive industry and between retail businesses.
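To make the idea of structured, standards-based order exchange concrete, here is a minimal sketch in Python that composes an order message in an EDIFACT-like, segment-oriented style. It is an illustrative simplification only: the function name, the segment contents and the buyer and item codes are invented for this example, and the output is not a conformant EDIFACT message.

```python
def build_order_message(order_no: str, buyer: str, lines: list) -> str:
    """Compose a simplified, EDIFACT-style purchase order as segments.

    Each segment is a tag plus data elements separated by '+',
    terminated by an apostrophe, mimicking the look of EDI messages.
    """
    segments = [
        f"UNH+{order_no}+ORDERS",  # message header: reference and message type
        f"NAD+BY+{buyer}",         # name and address segment for the buyer
    ]
    for item_code, quantity in lines:
        segments.append(f"LIN+{item_code}")    # line item identification
        segments.append(f"QTY+21:{quantity}")  # ordered quantity
    segments.append(f"UNT+{len(segments) + 1}+{order_no}")  # trailer with segment count
    return "'".join(segments) + "'"

# Hypothetical order: two items requested by a retail customer.
print(build_order_message("000101", "RETAILER-01", [("4711", 50), ("4712", 20)]))
```

The point of agreed standards is that both partners' information systems can parse such a message automatically, so an incoming order can be turned into a sales order record without manual re-keying.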
Table 2. Utilization of Individual Applications in the Analyzed Group of SME

Business area (number of companies) | E-mail
Manufacturing | 4
Building | 4
Services | 11
Logistic | 2
ICT | 6
Automotive | 3

The applications surveyed, in addition to e-mail, were EDI, ERP, electronic supply, intranet, online orders, outsourcing, ICT training and CAD.
The reason for the acceptance of EDI applications by the organizations is to prevent the loss of current customers and also to attract new ones. Another reason for the innovation of ICT is the frequent need of SME to implement applications for bar code data capture. In these cases the purchase of applications is typically focused on solving current problems and not on further ICT usage across other areas of the organization, such as marketing or knowledge management.

Innovation

Innovation, flexibility and the ability to promptly respond to the wishes of customers are among the key advantages of small and middle-sized companies and simultaneously a condition of the long-term competitiveness of an organization. "Innovation represents invention multiplied by business creativeness, management, co-operation, customers, company culture, suppliers, competitors, systemic view, external conditions, natural environment and fortuitous factors."

This definition is based on the long-term research of Prof. Mulej (2007, p. 39) from the University of Maribor in Slovenia, who is intensively involved in the area of SME in the post-communist countries. The definition implies that some factors cannot be influenced completely, such as the natural environment, but that it is nevertheless necessary to respond to them in time.

The following Table 3 specifies the percentage share of investment into research in the turnover of the analyzed organizations (as at the end of year 2006). The table shows that especially small organizations do not possess sufficient finance for searching for new innovative products or services. In this area universities and grants supported by the European Union may significantly help.
Table 3. Share of Investment into Research in the Turnover (number of companies)

Number of employees in 2007 | 0% | 0.1-1% | 1.1-5% | above 5%
1-9 | 4 | - | - | -
10-49 | 9 | 2 | 4 | -
50-199 | 3 | 3 | 3 | -
200-250 | - | - | - | 2
In total | 16 | 5 | 7 | 2
Organizational Factors

Small companies are characterised by simple organizational structures. This may be considered a positive factor when implementing information systems. Another particular advantage of this flat organizational structure is the relative simplicity of analyzing the organization and of adjusting the information system to the needs of the corporate strategy (if one has been defined) generated by the SME's owners and managers.

Since the Czech Republic joined the EU there has been an increased need for small and middle-sized enterprises to obtain a variety of certificates (such as those for safety, quality and environmental waste management). As a result, small organizations have been looking for ways to keep records of all the necessary documents connected with production, customers, safety at work, etc. This demand can be met by purchasing new ICT applications, but unfortunately the companies seem not to exploit their further possibilities. Another reason for ICT purchase is an increasing number of customers, growth of the organization and the need to have an overview of all orders. This was found to be especially so within the small organizations of the analyzed group. An effort to decrease costs is another frequent reason for the implementation or innovation of information and communication technologies.
Management in Small and Middle-Sized Enterprises

Success and competitiveness in SMEs are influenced by a combination of the business capabilities of the owner, his visions and strategies, and the ways he chooses to reach those visions. Further important factors are market impact, flexibility of the employees, the ability to innovate, a sufficient number of customers and independence in decision-making.

A different situation can be seen in the case of the 7 organizations from the observed group that have a 25% share of foreign capital: their decision-making process is strongly driven by the parent company abroad. These organizations, with foreign owners, have access to financial resources to purchase new manufacturing or information technologies or to undertake various marketing activities or training projects.

The way ICT is accepted in the company also depends on the previous training and experience of the managers and users. In small companies the training of employees is often insufficient and the attitude towards information technologies is more sceptical. Expectations of increased productivity are in general higher than expectations connected with improved effectiveness, and also higher than the real results. Moreover, unlike in large companies, in smaller SMEs the whole information system or its applications are often dependent on a single individual who often has to act in isolation. Small companies are often managed only by their owner, and strategic decisions are made more on intuition than analysis, and often depend on complex family and property-related matters.

What makes SME managers invest in information and communication technologies or their innovation? There are a number of reasons, but the following are the most frequent:

• Respect of the managers or owners for ICT.
• Financial situation.
• Pressure from the customers or suppliers.
• Implementation of certificates and standards.
• Enthusiasm of the owner for new technologies.

The above specified reasons are also influenced by the style of management in an SME. As already mentioned in the previous section, the manager of a small or middle-sized company plays a key role in managing the enterprise, and has a much bigger personal influence than could be had in a large organization. This applies both to management and strategic planning. On the other hand, the managers or owners of SME do not prepare long-term strategic plans. The typical features of management in SME are the following:

• Management style: autocratic or directive.
• Decision-making: insufficient delegation of authority and insufficient purposeful planning, combination of strategic and operational decision-making, ad hoc decisions.
• Time horizon: short-term.
• Internal environment: absence of formal organization structure, management and information systems, high level of uncertainty, insufficiently shared information, absence of standard rules and procedures, usage of subjective criteria (missing formalized system), poor integration of activities, poorly defined working procedures, roles and responsibilities.
Despite the above, SME owners and managers have to solve problems equal to those of bigger companies, often without the supporting knowledge of associates from individual departments (such as ICT or marketing). That is why the managers make decisions in a much wider context, i.e. on both horizontal and vertical levels. This implies that the information needed for decision-making by managers of SME is much more important than the information for managers in large companies. As already mentioned, the prosperity of SMEs is significantly influenced by the experience, knowledge, relations and charisma of the owner or manager. The style of management is very important for the success and growth of the company.

We can observe several different managerial styles. For example, Covin & Slevin (1990) describe the relation between management style and the utilization of ICT. They distinguish entrepreneurial and conservative managerial styles in connection with organic (open) and mechanistic (bureaucratic) structures of the organization. Individual organizations can be divided into the four following groups:

1. Organizations with a proactive style of management with respect to ICT and a willingness to innovate ICT.
2. Organizations with a conservative style of management and approach to ICT.
3. Organizations respecting ICT, with an open, flexible and communicative managerial style.
4. Organizations with an organic structure with respect to ICT.

Results from this research suggest that companies of the third type cannot be found among small organizations, but only amongst bigger ones. Companies at the edge between types "2" and "4" are typified by a transfer to a new managerial style and innovation of the information system, often caused by new incoming management or by young members of the family, having graduated from university, joining the business. Companies typically demonstrating an enterprise style of management that are making changes in organizational structure are found between types "1" and "2". Based on the research results, these are often companies successfully utilizing information technologies, thanks to which they have grown quite quickly (e.g. companies doing their business on the internet), and which are currently searching for ways to efficiently manage and run a growing company. It is accepted that the differences between the management, organizational structure and approach to ICT of individual companies have been somewhat simplified here. The reality in individual organizations, with all the factors and their impacts, is much more complicated.
Investment into ICT

SME often tend to utilize information technologies only as a tool for data processing, not as a means of sharing knowledge or gaining strategic advantage. During the implementation of ICT they do not consider their current organizational structure and the possibilities of making changes. Furthermore, they typically rely on short-term contracts with suppliers. The benefits of ICT for SME can be observed in the following areas:

• Higher productivity and performance of the company.
• The possibility of new organizational forms, e.g. development of business networks, participation in supply chains.
• Increased added value of the product or services.
• Entry to new markets.
• New products or services, changing business processes.
• Utilization of new business channels.
• Responding to new business activities of competitors.
Simultaneously, the advantage of SMEs compared with large companies may be higher flexibility during the implementation of information technologies and during the promotion of the necessary changes arising from the implementation. ICT nowadays largely helps small and middle-sized companies in traditional areas, such as warehouse management, payment procedures, administration, sales and the improvement of post-sale services. Unfortunately, in real life ICT is not widely used in areas such as marketing, purchasing and managing relations with customers. This is caused by insufficient knowledge of ICT capabilities and the inability to quantify the benefits of ICT for the organization, and it is often connected with the absence of a corporate and information strategy. It is necessary to acknowledge that SME usually do not have an adequate number of appropriately experienced employees. Information systems should be connected with a systematic approach to management and planning, while the management of SME is based on ad hoc decision-making and short-term planning. Although the prices of ICT in certain areas are decreasing, the costs for SME are still significant. As the implementation of ICT often results in dramatic changes, it requires additional support from management and adequate knowledge. Many owners of companies are, however, busy with "the survival of the company" and do not invest time into such projects. They are also afraid of the possible risk of failure of the project, do not wish to risk finance and cannot see the future advantage of ICT. That is why a number of methods have been designed that aim to create a supporting framework providing management with guidelines on how to proceed when planning information systems or their innovation.

As the financial capabilities of small and middle-sized companies are quite limited, investment into ICT often depends on the size of the organization (see the following Table 4). Despite the current options to utilize loans from European funds, SME are afraid of the payback from such projects, with managers or owners being risk averse in this context. This situation can be addressed by software applications such as "open source" (software with an open source code, meaning both the technical availability of the code and its legal availability, i.e. a license that enables users, upon meeting certain conditions, to utilize the source code and to modify it). Such software is often freely accessible on the internet. The implementation of such software solutions is, however, demanding in terms of the knowledge required by those implementing these systems, which is usually absent in SME. A frequent reason for the purchase of ICT applications is to quickly respond to the needs of customers (e.g. obtaining bar codes of goods, requirements for electronic communication, etc). Apart from that, the size of the organization plays an important role in the utilization of ICT, as the development of the organization increases demand for administrative systems if the company is to remain cohesive.
Table 4. Share of Investment in ICT in the Analyzed Organizations Compared to the Turnover of the Individual Organizations in %, for the year 2006 (number of companies)

Headcount | No investment | 0.1-1% | 1.1-5% | above 5%
1-9 | 4 | - | - | -
10-49 | 6 | 5 | 4 | -
50-199 | 4 | 2 | 2 | 1
200-250 | - | - | - | 1

Individual Factors

A frequent reason for the innovation of ICT or the purchase of a new application is the cited dependency of the current system solely on one person who possesses the necessary knowledge and experience. This is usually an employee who developed or implemented the application. Due to the shortage of documentation and the required compatibility with other applications, the owner recognizes this handicap and searches for alternative solutions to remove the dependency on a single associate. Another reason for the acceptance of ICT may even be enthusiasm and interest in information technologies on the part of the owner or his employees, and the need for new and innovative services or products requiring new communication and selling channels utilizing ICT. An example may also be the search for competitive advantages in new areas, e.g. the need to achieve certification in areas such as environmental waste management and the related search for new products and services corresponding to these certificates. Gaining certificates requires maintaining very precise documentation, which is not possible without ICT. An additional reason may be the search for market gaps, the need to be in close contact with customers and to respond to their specific wishes.

Skills and Knowledge in the ICT Area

The competitiveness of SME is strongly dependent on the availability of company resources. This issue is explained by resource-based theory, which emphasizes those sources that bring important qualities to a business that are hard to replicate by its competitors. Such sources encompass especially the long-term process of training and education and the culture of the organization. For example, Peppard & Ward (2004) define company sources in their article as "all assets, abilities, company attributes, organization processes, information and knowledge."

The skills of employees in the area of ICT are at a higher level especially in bigger organizations, where ICT is commonly used. The following Table 5 details the results of a questionnaire inquiring about the average level of ICT literacy. The questionnaires were distributed to two thirds of the employees in each organization. The respondents were asked to evaluate their ability to recognize and formulate their information needs, their overview of information sources, and their ability to search for information using ICT, to analyze such information and to apply it in solving specific realistic situations or specific tasks. In particular, the questionnaire ascertained the following skills:

• Minimum skills: e.g. sending an e-mail, searching for a file or information on the internet.
• Ability to create a folder, search for and copy a file, re-name a file.
• Working skills when using application software.
• Work with an office package (spreadsheet, word processor, exporting files).
• Installation of a simple application on a PC (such as an anti-virus program).
Table 5. Average ICT Literacy of Employees in the Analyzed Group of Organizations (number of companies by headcount in 2007 at average ICT literacy Levels 1 to 5)

The ability was ranked from 1 to 5 using a Likert scale, where 1 = the best result and 5 = the worst result.

Barriers to ICT Adoption in Small and Medium-Sized Enterprises

The most significant barriers to ICT purchase are mainly internal issues of the organizations, such as a shortage of associates with appropriate knowledge, and financial and often family reasons. Similarly to the division of the individual factors contributing to the acceptance of information and communication technologies, the cited barriers to ICT adoption in SME can be divided as follows:

1. Technological barriers (problems of security, insufficient infrastructure).
2. Organizational barriers (management style, shortage of financial sources).
3. Barriers arising from the surrounding environment (insufficient knowledge of the market).
4. Individual barriers (insufficient knowledge, personal relations in the organization).

Technological Barriers

The biggest barrier to the utilization of new information and communication technologies is, apart from insufficient infrastructure in the organization, the fear regarding the security of internal data. This fear is sometimes a reason for not purchasing ICT from a well-established provider. Some organizations consequently try to design such applications internally, although this solution is not always successful. The employees working on this task often lack sufficient knowledge and experience and are also unable to document their solution, which can bring problems in the future. Another barrier may be caused by fear of the financial demands, but this can be resolved by purchasing applications as information services from an external supplier.
Decision-Making in SME

One of the significant barriers to ICT acceptance in small and middle-sized companies is resistance to organizational changes, especially in connection with older managers or owners. Another barrier may be a missing long-term corporate strategy, often omitted due to a shortage of long-term orders and stable customers. Companies frequently have to respond quickly to the individual demands of random customers and do not consider any long-term corporate strategy. That is why planning in such organizations is focused on "sole survival" and on short-term activities. The managers or owners of SME make their decisions on the basis of current needs and the current situation. Consequently, management processes are very sensitive to market behavior, changing external conditions and market trends.
The time horizon of decision-making in SME is typically short-term, usually taking the form of a response to a specific event rather than planned action. A low level of detailed planning often causes issues during the implementation and utilization of information systems. Moreover, only a small percentage of the leaders of small companies utilize methods of forecasting, financial analysis, and project management. These results are also supported by a study (Ghobadian & O'Regan, 2006) analyzing 276 small and middle-sized companies in England. The decision-making process of the managers is rather intuitive, based on instinctive decisions and less dependent on formal models of decision-making. They tend not to pass on information and not to delegate decision-making authority to their subordinates. They are often the only people in the company who have the authority, responsibility and access to the information necessary for identifying business opportunities, including the utilization of information technologies for strategic and competitive purposes.
Surrounding Environment

The barriers preventing wider acceptance of ICT in small and medium-sized companies are further compounded by an inability to apply ICT in relations with customers and suppliers. An important issue is that SME do not influence their business-specific surrounding environment; rather, they are influenced by it, and particularly by their customers.
Individual Factors

One of the main barriers preventing the acceptance of ICT, especially by small organizations, is the lack of knowledge and skills regarding information technologies. Small companies do not have ICT departments (except for the organizations with a higher number of employees in the analyzed sample) and rely on either external consultants or friends. The role of such a consultant is not always fully understood, which leads to a number of mutual misunderstandings during the specification, purchase and implementation of ICT applications. This problem is connected to a missing information strategy and, as previously mentioned, to insufficient knowledge of ICT on the part of the owner or manager of the organization.

According to this research, the analyzed organizations were often employing less experienced students, and owners typically searched for simple and cheap solutions using their own resources, relatives or friends. Such solutions give rise to various problems connected with a lack of experience and specific knowledge, and they do not bring the expected benefits. They are often only a "quick fix" and unfortunately a short-term solution to a given issue, forgetting about further possibilities for the utilization of ICT. This is connected with the short-term planning of the organizations. ICT consequently cannot contribute to increased competitiveness and becomes only a tool for cost reduction and the minimization of the administrative burden. In order to remove this barrier, universities may contribute high-quality knowledge to managers and owners of SME by providing education and training in the area of management and ICT. Another option would be the utilization of specific ICT knowledge and skills in co-operation with other organizations or business networks.
Recommendations and Trends

Technology availability has increased dramatically during the past ten years as a part of the internet phenomenon, mobile applications and the consumer electronics movement. The simplicity of technology solutions has provided users with the ability to make their own choices rather than rely on ICT staff. Now, personal e-mail packages, instant messaging, laptop computers, mobile devices, personal IP (internet protocol) based telephony (e.g. Skype) and even personal networking and storage preferences are becoming commonplace.
Software as a service (from an external supplier) is becoming a viable option which supports wider ICT adoption in SME. This option removes the need for trained specialists and brings cost savings. Its benefits are comparably quick implementation, professional technical support and the certainty of high-quality backup of the data. Wide implementation of this option is currently prevented especially by SME not fully trusting external data storage, by low levels of trust in outsource providers, and by suspicions regarding the risk of their bankruptcy.

The growing presence of open source software cannot be ignored, so the use of open-source technology will become a commonplace strategy in more and more organizations. A big advantage of this solution is the minimal level of investment. On the other hand, it requires employees to have good ICT knowledge and experience.

Information and communication technology as a strategic tool will be one of the most significant influences on enterprises in the near future, no matter whether they are big or small. Companies have to think about using this strategic factor in their corporate strategies if they want to be competitive. These strategies will have to integrate communication with the customers and their data with product information as part of an information strategy. The communication and sharing of knowledge inside the company is also more and more important and plays a key role in competitive strategies. Organizations need to respond quickly to constantly changing conditions and to innovate products and services, which requires employees to observe new trends and technologies and to improve their knowledge. Such organizations are classified as learning or intelligent organizations. These learning organizations (Schwaninger, 2006, p. 7):

• Are able to adapt to changing conditions.
• Influence and are able to change their environment.
• Contribute to the development of the whole organization.
This does not mean only learning new things, but also being able to learn from mistakes, in order to prevent their re-occurrence. That is why experience and knowledge need to be shared across the organization and be readily available to all employees. Again, the appropriate use of ICT is important here.
Conclusion

SME are exposed to high competitive pressure. If they wish to survive the current business competition, they have to search for new business opportunities. This effort has to be significantly supported by information and communication technologies. But the implementation of ICT can cause a number of issues for SME, such as insufficient financial sources, lack of experience with ICT, and insufficient knowledge and skills in the area of the computer literacy of employees. That is why the most frequent purpose of the implementation of ICT in SME is, as supported by this research, the survival of the organization in its competitive environment. Apart from that, the adoption of ICT in the organizations is strongly influenced by the managerial style of the owner or manager. Motivation to purchase and implement ICT is therefore also connected to the clarification of ownership relations and the authority of individual owners.

The successful performance of SMEs in the business environment and their consequent development is influenced by the ability of the organization to respond flexibly to customer demand and by the ability to innovate in products or services. That is why the owner or manager, across all of the surveyed groups of companies, should consistently re-evaluate and search for an appropriate corporate strategy, keep developing himself and his employees, monitor the competitive environment of the
market and be familiar with the demands and wishes of customers. This consistently repeated process is illustrated in the following Figure 1. All the specified activities require the support of adequate tools, which are currently available within the domain of information and communication technologies.

For ICT to become one of the tools of competitive advantage, organizations will have to have a clear vision of the future and of how to reach it. Current information and communication technologies enable a whole range of new business opportunities and are constantly upgraded, but especially for the owners of SME it is not easy to keep abreast. The adoption of ICT is connected with higher investment demands that often create barriers to the wider acceptance of ICT in small and middle-sized enterprises. Another issue may also be the fact that the financial benefit and payback of ICT is not easily quantifiable without specific knowledge.

Current business entities are forced to constantly improve their products and services. They have to utilize information and communication technologies and modern management methods. This is the only way they can succeed in such
competitive environments. Companies have to search for appropriate business strategies using an approach that reflects their own characteristics, and to use as many benefits of ICT in the proposed business strategy as possible. Organizations are more and more connected with their suppliers and customers, yet they need not lose their legal identity. They have their own culture and managerial style, they search for their own business strategies, and they should seek to share management decisions with their co-operating partners and customers. Managers or owners of SME are, however, often afraid of the organizational and financial demands of the implementation of ICT. This fear can be prevented by adequate strategic planning and preparation.

From the successfully growing companies we analyzed we can see the importance of business, information and knowledge strategies. Without the articulation of these strategies, companies will find it difficult to find their way in the current business environment. These strategies have to be followed by other supporting strategies, i.e. marketing, finance and human resources. It is highly important that even those supporting strategies are in mutual harmony and support the defined global business strategy.
Figure 1. Relation between business and information strategy
References

Antlová, K. (2007). Strategic use of ICT in small businesses. In M. Munoz (Ed.), E-activity and leading technologies, 3, 414-418.

Choi, B., & Lee, H. (2003). An empirical investigation of KM styles and their effect on corporate performance. Information & Management, 40, 403–417. doi:10.1016/S0378-7206(02)00060-5

Cocrill, A., & Lewis, R. (2002). Going global – Remaining local. International Journal of Information Management, 22, 195–209. doi:10.1016/S0268-4012(02)00005-1

Covin, J. G., & Slevin, D. P. (1990). Juggling entrepreneurial style and organizational structure. Sloan Management Review, 43–53.

Craig, A., & Annear, J. (2003). A framework for the adoption of ICT and security technologies by SMEs. 16th Conference of the Small Enterprise Association of Australia, Ballarat.
Heikkila, J. (1991). Success of software packages in small businesses. European Journal of Information Systems, 1(1), 159–169. doi:10.1057/ejis.1991.31

Jeffcoate, J. (2002). Best practice in SME adoption of e-commerce. Benchmarking: An International Journal, 9(2), 122–132. doi:10.1108/14635770210421791

Kerney, S., & Abdul-Nour, G. (2004). SME and quality performance in networking environment. Computers & Industrial Engineering, 46, 905–909. doi:10.1016/j.cie.2004.05.023

Kerste, R., Muizer, A., & Zoetermeer, A. (2002). Effective knowledge transfer to SMEs. Strategic Study B200202.

Levy, M., & Powell, P. (2000). Information strategy for SME: An organizational perspective. The Journal of Strategic Information Systems, 9, 63–84. doi:10.1016/S0963-8687(00)00028-7
E-Business Indicators. (2007). A pocketbook.
Levy, M., & Powell, P. (2005). Strategies for growth in SMEs. Oxford: Butterworth Heinemann.
Gemino, A., Mackay, N., & Reich, B. H. (2006). Executive decision about ICT adoption in SME. Journal of Information Technology Management, 17(1).
Levy, M., Powell, P., & Yetton, P. (2002). The dynamics of SME information systems. Small Business Economics, 19, 341–354. doi:10.1023/A:1019654030019
Ghobadian, A., & O'Regan, N. (2006). The impact of ownership on small firm behaviour and performance. International Small Business Journal, 24(6), 555–586. doi:10.1177/0266242606069267
Martin, L. M., & Matlay, H. (2001). Approaches to promoting ICT in SME. Internet Research: Electronic Network Applications and Policy, 11(5), 399–410. doi:10.1108/EUM0000000006118
Greiner, L. E. (1972). Evolution and revolution as organisations grow. Harvard Business Review, July/August, 37-46.

Chau, S. B., & Turner, P. (2002). A four phase model of EC business transformation amongst SME. In Proceedings of the 12th Australasian Conference on Information Systems, Australia.
Mulej, M., & Potocan, V. (2007). Transition into an innovative enterprise. University of Maribor, Slovenia.

Nolan, R. L. (1979). Managing the crises in data processing. Harvard Business Review.

Oh, W., & Pinsonneault, A. (2007). On the assessment of the strategic value of information technologies. MIS Quarterly, 31(2), 239–265.
Peppard, J., & Ward, J. (2004). Beyond strategic information systems: Towards an IS capability. The Journal of Strategic Information Systems, 13, 167–194. doi:10.1016/j.jsis.2004.02.002 Powell, P., Levy, M., & Duhan, S. (2001). Information system strategies in knowledge-based SMEs. European Journal of Information Systems, 10, 25–40. doi:10.1057/palgrave.ejis.3000379 Rivard, S., Raymond, L., & Verreault, D. (2005). Resource-based view and competitive strategy. The Journal of Strategic Information Systems, 20, 1–22.
Schwaninger, M. (2006). Intelligent organizations. Berlin: Springer.

Tetteh, E., & Burn, J. (2001). Global strategies for SME. Logistics Information Management, 14(1), 171–180. doi:10.1108/09576050110363202

Valkokari, K., & Helander, N. (2007). Knowledge management in different types of strategic SME networks. Management Research News, 30(7), 597–608. doi:10.1108/01409170710773724
This work was previously published in Enterprise Information Systems for Business Integration in SMEs: Technological, Organizational, and Social Dimensions, edited by Maria Manuela Cruz-Cunha, pp. 342-361, copyright 2010 by Business Science Reference (an imprint of IGI Global).
Chapter 7.2
Doing Business on the Globalised Networked Economy: Technology and Business Challenges for Accounting Information Systems Adamantios Koumpis ALTEC S.A., Greece Nikos Protogeros University of Macedonia, Greece
Abstract

In this chapter the authors present a set of challenges that are to be faced by accounting information systems. More specifically, these include the support of interoperable accounting processes for virtual and networked enterprises and for open-book accounting, the creation of novel interface metaphors that will automate and increase the usability of accounting information systems, and, last but not least, the provision of integrated e-accounting platforms.
INTRODUCTION

As the modern economy depends more and more on information and communication technologies (ICTs), interest in the economic impacts of these
technologies is growing. The combination of economic fundamentals triggered a lively public debate on the underlying causes and consequences. The introduction of the World Wide Web and browsers fuelled the growth of the Internet, reaching millions of users worldwide. Paralleling the growth in the number of users was a growth in the number of enterprises wishing to serve this new "online" population. New ideas and new business models were introduced, and investors were happy to pour money into them irrespective of actual profit figures. Many of the new firms went public and prices in the high-tech segments of the stock markets soared. Moreover, companies related to Internet infrastructure, computers and software became all the more important. According to Coman and Diaconu (2006) and Diaconu (2008), globalization is a historical process created out of the need to improve resource allocation and to develop bigger markets
for the global economy. Ideas about going global can be found in Adam Smith's and David Ricardo's works, through Marx's vision of the phenomenon, up to our age. We can consider it one of the biggest social processes humanity has ever faced. That is why its impact on the global economy is huge, and why the accounting sector, which plays a vital role in the information processes of society, is so important. One of the main international accounting processes of the current period is therefore the harmonization of national accounting systems. The harmonization process is influenced by several factors, like culture, politics, economy and also sociological behaviors.

Furthermore, in an increasingly competitive, knowledge-based economy, intangible assets, such as brand awareness, innovation, and employee productivity, have become the key determinants of corporate success. And given that the investments companies make to build those intangible assets - such things as advertising, employee training, and R&D - are flushed through the income statement, balance sheets are increasingly a poor reflection of the value of companies' businesses. In contrast to the traditional accounting system, which is focused on isolated transactions and historical costs, to determine the future value of a company one should not only look at past history but also employ new measures to project forward.

In this chapter we present some ideas that aim to leverage research efforts in the area of Accounting Information Systems. We position our ideas with respect to ongoing developments in the research fields of accounting, business and computing. The increase of corporate knowledge capital and the sustainable support of the agility potential of companies is not only a matter of how much intelligence a company exhibits in organizing its business-related activities, but also of the way it exploits its accounting infrastructure to respond to the existing challenges of the globalised and networked economy.
THE MARKET FOR ACCOUNTING SOFTWARE

An accounting information system (AIS) is the system of records a business keeps to maintain its accounting system. This includes the purchase, sales, and other financial processes of the business. The purpose of an AIS, as it has been historically defined through various commercial system implementations, is to accumulate data and provide decision makers (investors, creditors, and managers) with information to make decisions. While this was previously a paper-based process, most modern businesses now use accounting software. In an electronic financial accounting system, the steps in the accounting cycle are dependent upon the system itself, which in turn is developed by programmers. For example, some systems allow direct journal posting to the various ledgers and others do not. Accounting information systems provide efficient delivery of the information needed to perform necessary accounting work and assist in the delivery of accurate and informative data to users, especially those who are not familiar with the accounting and financial reporting areas themselves. Furthermore, accounting software is typically composed of various modules, different sections dealing with particular areas of accounting. Among the most common are:
Core Modules

• Accounts receivable: where the company enters money received
• Accounts payable: where the company enters its bills and pays money it owes
• General ledger: the company's "books"
• Billing: where the company produces invoices to clients/customers
• Stock/Inventory: where the company keeps control of its inventory
• Purchase Order: where the company orders inventory
• Sales Order: where the company records customer orders for the supply of inventory
Non-Core Modules

• Debt Collection: where the company tracks attempts to collect overdue bills (sometimes part of accounts receivable)
• Electronic payment processing
• Expense: where employee business-related expenses are entered
• Inquiries: where the company looks up information on screen without any edits or additions
• Payroll: where the company tracks salary, wages, and related taxes
• Reports: where the company prints out data
• Timesheet: where professionals (such as attorneys and consultants) record time worked so that it can be billed to clients
• Purchase Requisition: where requests for purchase orders are made, approved and tracked
Of course, different vendors use different names for these modules, but they all come down to much the same set as listed above. Most midmarket and larger applications are sold exclusively through resellers, developers and consultants. Those organizations generally pass on a license fee to the software vendor and then charge the client for installation, customization and support services. Clients can normally count on paying roughly 50-200% of the price of the software in implementation and consulting fees. Other organizations sell to, consult with and support clients directly, eliminating the reseller.

The most complex and expensive business accounting software is frequently part of an extensive suite of software often known as Enterprise Resource Planning (ERP) software. These applications typically have a very long implementation
period, often greater than six months. In many cases, these applications are simply a set of functions which require significant integration, configuration and customisation to even begin to resemble an accounting system. The advantage of a high-end solution is that these systems are designed to support individual, company-specific processes, as they are highly customisable and can be tailored to exact business requirements. This usually comes at a significant cost in terms of money and implementation time.

As technology improves, software vendors have been able to offer increasingly advanced software at lower prices. This software is suitable for companies at multiple stages of growth. Many of the features of midmarket and high-end software (including advanced customization and extremely scalable databases) are required even by small businesses as they open multiple locations or grow in size. Additionally, with more and more companies expanding overseas or allowing workers to work from home, many smaller clients have a need to connect multiple locations. Their options are to employ software-as-a-service or another application that offers them similar accessibility from multiple locations over the internet.

With the increasing dominance of financial accounts being prepared with accounting software, as well as some suppliers' claims that anyone can prepare their own books, accounting software can be considered at risk of not providing appropriate information, as non-accountants prepare accounting information. As recording and interpretation are left to software and expert systems, the necessity of having a systems accountant overseeing the accountancy system becomes ever more important. The setup of the processes and the end result must be vigorously checked and maintained on a regular basis in order to develop and maintain the integrity of the data and the processes that manage these data.
CHALLENGES

Interoperability of Accounting Processes

Interoperability is today a key issue because of the need to meaningfully connect several types of autonomous entities, i.e. systems, people, software applications, enterprises. Establishing interoperability between pre-defined or dynamically occurring entities requires infrastructures and theories. Infrastructures can be based on open services facilitating the realisation of interoperability. Today, however, such services focus on the syntactical and message-passing levels of interoperability, paying too little attention to establishing and maintaining shared meaning. Theories are required to guarantee interoperability on the basis of consistent relations among the various entities. The aim of a new type of AIS is to put into practice the most advanced accounting research visions and earlier software engineering and computer science results concerning both these services and these theories. Specifically, such an AIS will demonstrate the power of an upper ontology (used to break down the barriers between distinct domains) to realise universal accounting services for establishing business process interoperability within a globalised and networked economy. The result will be a set of prototype web services that enable interoperability between other (regular) web services by mapping various enterprise accounting models, concepts, queries etc. between distinct semantic domains, aided by emerging technologies for modelling language and ontology integration. Such a prototype could be used to demonstrate the integrated usage of an upper ontology and domain ontologies. For example, while accounts are usually distinguished into five different types, namely assets, liabilities, equity, revenues and expenses, this categorisation can be used to drive the separation of functional units within an AIS but can in no
case form the basis for a meaningful and sense-making conceptual organisation. Such web services may form the basis not only for enterprise-wide accounting but also for cross-enterprise accounting operations taking the form of virtual enterprise accounting. We elaborate on this in the next section.
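As a minimal sketch of the mapping idea, assume the upper ontology is reduced to the five basic account types named above: two enterprises describe their local chart-of-accounts concepts against the shared types, and a translation service lifts a concept from one domain and grounds it in the other. All concept names and the dictionary-based representation are hypothetical, chosen for illustration rather than taken from any ontology standard.

```python
# The shared upper ontology: the five basic account types.
UPPER_ONTOLOGY = {"asset", "liability", "equity", "revenue", "expense"}

# Hypothetical local vocabularies of two enterprises, each mapped
# onto the shared upper-ontology types.
DOMAIN_A = {"Forderungen": "asset", "Verbindlichkeiten": "liability", "Umsatz": "revenue"}
DOMAIN_B = {"receivables": "asset", "payables": "liability", "sales": "revenue"}

def translate(concept, source, target):
    """Translate a local concept between domains via the upper ontology."""
    upper_type = source[concept]          # lift to the shared upper-ontology type
    assert upper_type in UPPER_ONTOLOGY
    matches = [name for name, t in target.items() if t == upper_type]
    if not matches:
        raise LookupError(f"No target concept for upper type '{upper_type}'")
    return matches[0]                     # ground in the target domain

print(translate("Forderungen", DOMAIN_A, DOMAIN_B))  # -> receivables
```

A real prototype would of course need much richer domain ontologies than a flat dictionary, since accounting concepts rarely map one-to-one, but the lift-and-ground pattern through a shared upper layer is the essence of the approach.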
Accounting for the Virtual and Networked Enterprise

A virtual enterprise is defined as a temporary alliance in which different companies, positioned complementarily or supplementarily along the value chain of the related business activities, combine their strengths to provide a specific service traditionally provided by a single enterprise. They come together to share competencies and resources in order to better respond to business opportunities, and their cooperation is supported by computer networks and adequate IT tools and protocols (Putnik et al., 2005; Protogeros, 2007). The life-cycle of a virtual enterprise is generally considered to be a four-stage process: creation, operation, evolution and dissolution. Among these, the first step (virtual enterprise creation) involves dynamically generated partnerships and a dynamically composed service workflow for the successful operation of the virtual enterprise. We consider that a virtual enterprise is dynamically created following a process of order generation, partner search and selection, bid negotiation and contract awarding. Workflow is used to define the business process logic that is shared by the participants of a formed virtual enterprise. If we consider the workflow definition as a "class" in programming, a virtual enterprise can be considered a running instance of such a class, which is triggered by customer requirements, created through its life-cycle, controlled by workflow management, executed by a workflow engine and dismantled once its goal is fulfilled (Wang, 2006); a small sketch of this analogy is given after the following list. The most important requirements for a virtual enterprise have been identified in (Protogeros,
The most important requirements for a virtual enterprise have been identified by Protogeros (2005) and relate to the field of accounting practices as follows:

• Global visibility across the virtual enterprise. Just as the business field needs overall visibility across the entire life cycle of the products and/or services produced, from development to market launch, there is an equal need to follow the entire path of an accounting operation as well as its constituent transactions. Such visibility should be available to the personnel involved in the virtual enterprise operation from all the participating companies.

• Uniform and consistent business model. Several researchers define a business process of a virtual enterprise as a set of linked activities that are distributed across the member enterprises of the virtual enterprise and collectively realise its common business goal. A uniform business model is very important for the viability of the virtual enterprise. It should support the evolution of the product, process and organisation according to the increasing detail of the attributes representing the same concept (such as the status of an order, the categorisation of the order, the customer contact information, or the customer account representation) in a consistent manner.

• Variable / polymorphic accounting model. In contrast to the business model above, which needs to be characterised by uniformity, the underlying accounting model of a virtual enterprise does not need such a property. Quite the contrary: the fundamental strength of an accounting model that supports the needs of a virtual enterprise lies in its ability to support polymorphic and variable accounting practices. Participating members of a transaction do not need to share a common regulatory convention or reporting standard, as requested by, for example, the various national authorities involved in the process. Consider a hypothetical chemical company operating in Greece which, instead of buying chemical supplies from Germany, where a strictly regulated chemical supplies market exists with uncompetitive prices and complicated procedures for reporting the transfer of chemical goods to the customer, decides to buy the same quantity from a supplier in a Latin American country where no complicated reporting procedures apply. In both cases the accounting model on the Greek side is the same, but there is an obvious change in the use of the accounting model on the supplier's side.

• Consistent process and data model. The data model of the participating companies can capture various behavioural semantics of the business entities. A consistent conceptual business model of the business entities is therefore not sufficient for smooth operation (Setrag, 2002); data semantics and operational behaviour must also be represented and applied consistently.
Assets on the Net: The Case of Open Book Accounting

Almost all users of the Internet and the World Wide Web are familiar with the term open source software. Though it began as a marketing campaign for free software, open source software has become the most prominent example of open source development and is often compared to user-generated content. OSS allows users to use, change, and improve the software, and to redistribute it in modified or unmodified form, and it is very often developed in a public, collaborative manner. Alongside this well-known movement there is a (yet) less well-known movement for open book accounting. This is regarded as an extension of the principles of open-book management to include all stakeholders in an organisation, not merely its employees, and specifically its shareholders (including those whose shareholding is managed indirectly, for example through a mutual fund), which effectively means all members of the public. Since almost all accounting records are now kept in electronic form, and since the computers on which they are held are universally connected, it should be possible for accounting records to be world-readable. This remains an aspiration: at present, organisations run their accounts on systems secured behind firewalls, and the release of financial information by publicly quoted companies is carefully choreographed to ensure that it reaches all participants in the market equally. Nevertheless, price movements before the publication of market-sensitive information strongly indicate that insider trading, which is unlawful in most major jurisdictions, has taken place. Advocates of open book accounting argue that full transparency in accounting will lead to greater accountability and will help rebuild the trust in financial capitalism that has been so badly damaged by recent events such as the collapse of Lehman Brothers, the federal rescues of AIG, Fannie Mae and Freddie Mac, and the fire-sale of Merrill Lynch to Bank of America (Lowenstein, 2008; Bebchuk, 2008; Claessens et al., 2008; Andenas, 2008; Bardhan, 2008; Jenkinson et al., 2008), not to mention earlier scandals such as the collapse of Enron and WorldCom. According to Walker (2005), the phrase 'open book accounting' does not have a specific meaning; it is rather an expression of intent. That intent is to demonstrate the commitment and confidence of partners in a contractual relationship to share information on income and expenditure. The commitment between commissioners and providers to enter into this way of working needs to be made early in the relationship. This will enable them to describe the type and depth of information to be made available for discussion.
The arrangements will depend largely on the nature of the service and the length of the contract. Arrangements for access to accounts should be clear, as vagueness could lead to misinterpretation or confusion. For instance, the process may be annual and use independently audited accounts, or it may be more frequent and use working figures to identify changes to assumptions, so that these can be considered jointly. An exception clause could allow more frequent dialogue where circumstances can be described (such as legislative changes) that require it from one side or the other. Virtual organisations are far more likely to succeed when both providers and commissioners find their way to a win-win situation, and open book accounting can help provide that. Furthermore, open book accounting also helps develop clarity with commissioners about the differences between independent providers, especially the differences between public authorities and private businesses.
The Interface Is the Message

The metaphors, conceptual schemes and mental representations that people use to carry out work tasks and job assignments, from those we call 'simple' and 'everyday' to those we regard as more abstract or sophisticated, and of which work and learning in general are part, strongly shape the way tasks are carried out and the way work practices develop around them. By using this nonmaterial or intangible culture (Lakoff & Johnson, 1980), which is inherent to any specific job assignment, able to 'serve' it and to sufficiently express its characteristics, it is often possible to substantially improve the way a task is executed, however abstract, complex, detailed or sophisticated it may be. The same nonmaterial or intangible culture also comprises all the ideas, values, norms, interaction styles, beliefs and practices used by the members of a Community of Practice (CoP) that relates to a specific domain or field of operation.

To date, accounting suites have exploited the proliferation of graphical user interfaces and desktop computing metaphors but have not innovated by creating their own themes or metaphors. This is an area where major improvements and efficiencies can be gained by selecting an appropriate metaphor. This need not take the form of a personification of the corporate accountant in the way that Microsoft chose Bob back in 1995. Bob was a Microsoft software product, released in March 1995, which provided a then-new, nontechnical interface to desktop computing operations. Despite its ambitious nature, Bob failed in the market, though it left a rich legacy in several other Microsoft products that persists today. Microsoft Bob was designed for Windows 3.1x and Windows 95 and was intended to be a user-friendly interface for Microsoft Windows, supplanting the Program Manager. Bob included various office suite programs such as a finance application and a word processor. The user interface was designed to be helpful to novice computer users, but many saw its methods of assistance as too cute and involved. Each action, such as creating a new text document, featured step-by-step tutorials no matter how many times the user had been through the process, which some users considered condescending. Users were assisted by cartoon characters whose appearance was usually vaguely related to the task. In our case, a similar direction would be to choose a cartoon character, something like 'Ronnie the Accountant', who could be assigned responsibility for administering accounting operations preselected at design time of the particular AIS within a company. This step might be regarded as adding nothing new compared with the customisation process of an accounting software suite, but this is not the case: by the time users interact with a character, virtual though it is, the process
of disassociating certain tasks from the human user and delegating them to the virtual computer character has started. This is an important transition in the model of operation of the application and, for sure, an irreversible change in the perception and mental models of the users (just imagine how difficult it is to go back to command-prompt interfaces). Ronnie may be a standard character offered to users for taking control of routine tasks that the user prefers to delegate to an artificial persona rather than keep as a set of batch processes to carry out on a, say, weekly basis. A further development might include the provision of a set of cartoon characters that might be assigned responsibilities within the corporate accounting department, taking the form of a mixed-reality setting where real human personalities interact and collaborate with artificial characters on certain tasks. More than thirteen years have passed since Microsoft Bob's introduction, and the history of interactive computing has changed substantially since the introduction in 2003 of Second Life, the Internet-based 3D virtual world developed by Linden Research, Inc. Second Life provides a platform for human interaction with a high degree of naturalness.

Figure 1. How should Ronnie the accountant look? What type of tasks should he be able to accomplish on his own or under the company accountants' guidance and supervision?
A free downloadable client program called the Second Life Viewer enables its users, called 'Residents', to interact with each other through motional avatars, providing an advanced level of social network service combined with general aspects of a metaverse. Residents can explore, meet other residents, socialize, participate in individual and group activities, and create and trade items (virtual property) and services with one another. In our opinion, it is only a matter of time before Second Life-like dramaturgical and dramatic elements are used to supply the AIS with new interaction patterns and styles.
IMPLEMENTATION OF AN E-ACCOUNTING PLATFORM

The lack of a theory specific to the new forms of computing facilitated by pervasive environments may significantly slow developments in the field, since design and evaluation then lack a solid basis. Similarly, the e(lectronic)-, m(obile)- and p(ervasive)-commerce fields, especially in ubiquitous computing environments, may require new theories of accounting to speed up developments. M-accounting is not
simply about supporting companies and ventures through mobile devices; it facilitates a new concept and a novel positioning for the accounting profession in a mobile world. In this context, specific emphasis should be given to further investigating both the regulatory and the socio-cultural issues related to the acceptability of different accounting application and service concepts.

The run-time environment for such a mobile accounting platform needs a layered architecture with well-defined components, as shown in Figure 2. In this architecture, each layer uses the services provided by the layer below it and abstracts their complexities from the layers above it. A further advantage of the layered architecture is that it eases the development of the platform.

Figure 2. Run-time environment for a mobile accounting application and service architecture

The architecture basically comprises the following components, which may of course vary depending on the specifics of the particular implementation. The first is the Peer Controller, responsible for connecting with P2P infrastructures, which is divided into the following components:
• Semantic Registry component: This component is responsible for UDDI management, semantically registering accounting or other types of financial services that are provided by the particular corporate peer or by collaborating providers (third parties, or specialised accountants or auditors). The component uses the UDDI server that is part of the provider's ERP legacy information system. The Semantic Registry component creates a semantically enhanced UDDI registry where the services of the provider node are published. The registry may use profiles to create semantic descriptions of the services. The component also facilitates the annotation of semantic context for web service utilisation.

• Peer Mediation component: The Peer Mediation component acts as a communication manager between the various accounting service providers. It sends and receives requests for services to and from other peers. Two peers communicate by sending requests to each other's Peer Mediation components. Each Peer Mediator request is encapsulated into standard messages, whose definition is based on a common message ontology used by all provider nodes of the network.
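The following sketch illustrates the registration-and-lookup behaviour of a semantically enhanced registry of the kind described above. It is a minimal in-memory stand-in, not the UDDI API: the class and method names (SemanticRegistry, register, find) and the example endpoints and concept tags are all our own assumptions.

```python
# Minimal in-memory stand-in for a semantically enhanced service registry.
# A real deployment would publish to a UDDI server; a list suffices here.

class SemanticRegistry:
    def __init__(self):
        self._services = []  # (endpoint, set of ontology concepts)

    def register(self, endpoint, concepts):
        """Publish a service annotated with ontology concepts, not just a name."""
        self._services.append((endpoint, set(concepts)))

    def find(self, required):
        """Return endpoints whose semantic annotations cover the request."""
        return [ep for ep, c in self._services if set(required) <= c]

registry = SemanticRegistry()
registry.register("http://peer-a/ws/vat", {"tax", "vat", "greece"})
registry.register("http://peer-b/ws/audit", {"audit", "ifrs"})
print(registry.find({"tax", "vat"}))  # ['http://peer-a/ws/vat']
```

The point of the concept-based lookup is that a peer can discover a suitable accounting service without knowing its name or provider in advance, which is what distinguishes a semantic registry from a plain name-keyed directory.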
The Context Controller consists of the following components:

• Service Request Manager: Includes the computational model. The Service Request Manager is responsible for the completion of a client request. It consolidates all necessary accounting information about the client request and its execution, and undertakes the execution process. The workflow of the execution may be defined in a BPEL (or equivalent semantics language) file that is executed by the content manager. However, to support dynamic orchestration, the Service Request Manager uses process templates (semi-structured BPEL files) that are completed and executed with the information retrieved from the context translation that takes place in the Semantic Context Server and the Policy Controller. It is easy to see that a global accounting office with operations all over the world will need a more powerful context controller, relying on a robust computational model, than a small or micro accounting office with a few local clients.

• Semantic Context Server: A server that stores and manipulates information regarding the execution of an accounting service request. It navigates accounting transaction collections with the use of ontologies and allows the user to combine information for a particular purpose of use (context). The Semantic Context Server is responsible for translating the client request message into the set of concepts that define the nodes in the execution process. The translation is also based on a context server repository containing ontologies (described, e.g., in OWL) for inferring the semantic interpretation of the client request message context.

• E-Service Policy Controller: A supplementary component that contains information regarding the execution of a client request. It provides the Service Request Manager with information about the execution of requests, depending on service availability and service cost. The Policy Controller receives the translated information from the Semantic Context Server and creates an orchestration based on a policy defined by a rule-based system. The component retains the optimum orchestration according to rules regarding the cost and the time of the process execution.
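Two of the mechanisms above lend themselves to a short sketch: completing a semi-structured process template with translated context, and a rule-based policy choosing the optimum orchestration by cost and time. The sketch is illustrative only; real systems would use BPEL files and OWL reasoning, and every name below (TEMPLATE, fill_template, select_orchestration, the candidate peers) is a hypothetical stand-in.

```python
# Illustrative sketch: completing a process template and selecting an orchestration.

TEMPLATE = ["receive:{client}", "fetch-ledger:{period}",
            "compute:{service}", "reply:{client}"]

def fill_template(template, context):
    """Complete a semi-structured process template with translated context."""
    return [step.format(**context) for step in template]

def select_orchestration(candidates, max_cost):
    """Rule-based policy: cheapest candidate within budget, ties broken by time."""
    feasible = [c for c in candidates if c["cost"] <= max_cost]
    return min(feasible, key=lambda c: (c["cost"], c["time"]), default=None)

# Context as it might emerge from the semantic translation of a client request.
context = {"client": "acme", "period": "2010-Q4", "service": "vat-return"}
print(fill_template(TEMPLATE, context))

candidates = [
    {"peer": "peer-a", "cost": 40, "time": 2},
    {"peer": "peer-b", "cost": 25, "time": 5},
]
print(select_orchestration(candidates, max_cost=30))  # selects peer-b
```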
The Billing Server is responsible for billing the accounting services executed for the provider's registered clients or for collaborating providers. The Billing Server also feeds information to the provider's existing billing system. Billing is handled by two sub-systems:

• Client Billing sub-system: Contains all the information regarding the transactions of the provider's clients when they use the accounting platform.

• Collaboration Billing sub-system: Responsible for billing the collaborating peers. This makes particular sense for collaborating networks of accountants that are created (usually ad hoc) to supply services to end customers.
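The split between the two sub-systems amounts to routing each charge by the role of the billed party, as the minimal sketch below suggests; the class and field names are hypothetical.

```python
# Illustrative routing of charges between the two billing sub-systems.

class BillingServer:
    def __init__(self):
        self.client_ledger = []         # Client Billing sub-system
        self.collaboration_ledger = []  # Collaboration Billing sub-system

    def bill(self, party, amount, is_peer):
        """Record a charge in the appropriate ledger by the party's role."""
        entry = {"party": party, "amount": amount}
        (self.collaboration_ledger if is_peer else self.client_ledger).append(entry)

billing = BillingServer()
billing.bill("acme-ltd", 120.0, is_peer=False)            # registered client
billing.bill("partner-accountants", 45.0, is_peer=True)   # collaborating peer
```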
The Mobile Services Manager handles client requests. It enables location retrieval mechanisms in order to provide the necessary information to the accounting Service Request Manager. It focuses on location-based semantics and information provision through the use of positioning platform middleware. The architecture of this part of the system is based on systems that use and provide sufficient content to exploit semantic information on the basis of location. Though this seems less relevant to the current mainstream of corporate environments, it is our firm belief that it may become a major enabler of a higher degree of accounting profession agility in the (near) future.

Finally, the web-services-based wrapper platform is a set of web services that wrap the functionality of the provider's ERP legacy system, together with a set of components for administration and maintenance. The platform creates a service-oriented infrastructure that is used to integrate a particular peer with the particular ERP legacy system of the provider. The creation of the web services is based on a platform-independent framework that wraps the legacy system of the provider on the basis of semantic conceptualisations. The framework consists of software components that generate .NET web services from system conceptualisations. The conceptualisation definition is facilitated by a graphical environment. The platform also supports mechanisms for the discovery and invocation of the produced web services.
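To illustrate the wrapping idea, the sketch below exposes a single legacy ERP call behind an HTTP endpoint using only the Python standard library. It is a stand-in for the .NET wrapper framework described above, not that framework itself; the endpoint path, the legacy_trial_balance placeholder, and the port are all invented for the example.

```python
# Sketch of wrapping a legacy ERP call behind a web endpoint (stdlib only).

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_trial_balance(period):
    """Placeholder for a call into the provider's ERP legacy system."""
    return {"period": period, "debits": 1000.0, "credits": 1000.0}

class WrapperHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /trial-balance?period=2010-Q4
        path, _, query = self.path.partition("?")
        if path != "/trial-balance":
            self.send_error(404)
            return
        params = dict(p.split("=", 1) for p in query.split("&") if "=" in p)
        body = json.dumps(legacy_trial_balance(params.get("period", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), WrapperHandler).serve_forever()
```

A generated wrapper of this kind gives the peer a uniform, service-oriented face regardless of how the underlying legacy system is implemented.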
CONCLUSIONS

Accounting services shall constitute the future of our global economies, as they always have at all levels; the difference now is that this dominance will be evident and apparent at all levels of society and the economy. This means the emergence of accounting service science as an independent branch of the accounting discipline, one that will be taught, studied, researched and examined. This is not a novelty: management, (traditional) accounting and computing all experienced at some point their transformation from a profession into a science. Of course, some people still do not accept them as such; in the end, what defines something as a science is a matter of fact, not a matter of taste. The problem we foresee with accounting services is the basis on which their scientific foundations will be formed: accountants have every good reason to prefer an accounting background for accounting services, and economists and business and/or management science professionals have equally good reasons of their own. Finally, computer scientists and sociologists can also present grounded reasons for supplying the basis of this 'new old' science. Our opinion is clear-cut: there should be a totally new basis reflecting the concerns and considerations of all the aforementioned disciplines. Even more, to a great extent we see the need to introduce an extensive degree of spirituality and transcendental elements into the accounting service science, though the obvious remark is that this constitutes an unscientific practice. The reason comes from observing phenomena that dominate our daily personal and working lives: management no longer refers to Taylorism but, more and more, relates to leadership, where the latter term connotes an enlightened leader who, more or less, exercises his or her powers in a fashion that is totally unscientific and irrational. Additionally, dramatic elements in the organisation and conduct of business processes are not an innovation at all; the same holds for the ritual aspects found on numerous occasions within the modern corporate and business world. The answer to this is simple: people need to satisfy several levels of their lives, both as individuals and as members of an organised professional or non-professional community, and to do so they need transcendental elements that can address parts of the encountered situations in a satisfactory though totally unscientific way. Knowing how to do this is an extremely serious and scientific matter that will be given more and more importance by service scientists and service professionals. We should not forget that religion, in all its more or less sense-making realisations, constitutes an extremely good case for examining service science in an extremely well-defined application area, namely that of intangible spirituality. When you pay for an insurance service or a financial service, there are at least ways to measure the success of, or satisfaction with, the supplied (intangible) service in terms of some type of (tangible) result. Religion, on the other hand, is an area where the success of the supplied service has no tangible equivalent to use as a benchmark. What one should see here are all the dangers related to what we expect to happen: a violent and forceful advance of this new type of (should we call them holistic?) accounting services, which shall promise to fulfil
expectations that do not lie in the area of the traditional accounting service as such. Similarly, one can foresee the need to supply accounting services for literally virtual entities such as Second Life avatars. Speaking about the future of accounting services, one may expect forecasts of things like global accounting infrastructures (which, by the way, have already existed for years, for example in the case of accounting firms operating globally). The reader may understand the above terms in many different ways. History can teach us a lot; the difficult part is to show willingness to learn from its many lessons. In a similar fashion, one can view phenomena like the open source and free software communities as forms of service activism. The underlying cult of both may take extremely powerful forms and drive the software and Net economies in unpredictable directions. The French philosopher Louis Althusser defined a practice as any process of transformation of a determinate product, effected by determinate human labour, using determinate means (of production). Nowadays, when we talk a lot about practices on the Net, in services or e-services, it is tragically timely how much we lack intellectuals able to transform and process service or technology problems into societal or political ones and vice versa.
REFERENCES

Andenas, M. (2008). Who is going to supervise Europe's financial markets? In M. Andenas & Y. Avgerinos (Eds.), Financial markets in Europe: Towards a single regulator. London: Kluwer Law International.

Bardhan, A. (2008). Of subprimes and subsidies: The political economy of the financial crisis. Retrieved October 20, 2008, from http://ssrn.com/abstract=1270196
Bebchuk, L. A. (2008). A plan for addressing the financial crisis (Harvard Law and Economics Discussion Paper No. 620).
Protogeros, N. (2005). Virtual learning enterprise integration technological and organizational perspectives. Hershey, PA: Idea Group Publishing.
Claessens, S., Kose, M. A., & Terrones, M. (2008). What happens during recessions, crunches and busts? IMF Working Paper.
Protogeros, N. (2007). Agent and Web service technologies in virtual enterprises. Hershey, PA: Information Science Reference.
Coman, N., & Diaconu, P. (2006). The impact of globalization on accounting research. In Proceedings of the International Conference on Business Excellence, ICBE – 2006, Brasov, Romania.
Putnik, G. D., Cunha, M. M., Sousa, R., & Ávila, P. (2005). Virtual enterprise integration: challenges of a new paradigm. In G. D. Putnik & M. M. Cunha (Eds.), Virtual enterprise integration: Technological and organisational perspective. Hershey, PA: Idea Group Publishing.
Diaconu, P., Sr. (2007). Impact of globalization on international accounting harmonization. Retrieved October 17, 2008, from http://papers.ssrn. com/sol3/papers.cfm?abstract_id=958478
Setrag, K. (2002). Web services and virtual learning enterprises. Chicago: Tect.
Jenkinson, N., Penalver, A., & Vause, N. (2008). Financial innovation: What have we learnt? Bank of England Quarterly Bulletin, 2008, Q3.
Walker, N. (2005). Open book accounting – a best value tool. Good practice guides – Commissioning, Change Agent Team.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: Univ. of Chicago Press.
Wang, S., Shen, W., & Hao, Q. (2006). An agent-based Web service workflow model for inter-enterprise collaboration. Expert Systems with Applications. Amsterdam: Elsevier.
Lowenstein, R. (2008, April 27). Triple-a failure. New York Times.
This work was previously published in Social, Managerial, and Organizational Dimensions of Enterprise Information Systems, edited by Maria Manuela Cruz-Cunha, pp. 81-92, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 7.3
Factors Influencing Information System Flexibility: An Interpretive Flexibility Perspective

Ruey-Shun Chen, China University of Technology, Taiwan
Chia-Ming Sun, National Yunlin University of Science & Technology, Taiwan
Marilyn M. Helms, Dalton State College, USA
Wen-Jang (Kenny) Jih, Middle Tennessee State University, USA
Abstract

Information system (IS) flexibility has been regarded as an important indicator of information technology success. This article provides a model of IS flexibility encompassing all stages of IS implementation and usage. The model treats the cognitive factors of IS staff and users as important in leveraging IS flexibility through adaptation activities. A review of constructs extending from the interpretive flexibility perspective in the literature is used to identify these cognitive factors. By hypothesizing the relationships among these cognitive factors, IS flexibility, and adaptation activities, several propositions are identified. Empirical testing is then warranted to refute or validate the propositions.

Introduction

Enhancing information system flexibility with a flexible information technology infrastructure and adaptable application systems has been a critical issue for IS managers (Duncan, 1995; Prahalad & Krishnan, 2002; Sambamurthy et al., 2003). Information systems must be flexible to satisfy user requirements, particularly in changing environments. Sufficient IS flexibility can extend the life cycle of information systems and expand the effectiveness of IT investment (Chang & King, 2005; Gebauer & Schober, 2006; Moitra & Ganesh, 2005). Truex et al. (1999) found users can never be satisfied in emergent organizations, because their
needs are always changing. The user-to-systems relationship, which often experiences continuing conflict, requires application system flexibility. This viewpoint holds that IS projects should not only focus on design and development activities but also value the adaptation activities in both the implementation and post-implementation stages (Markus et al., 2003; Ross et al., 2003; Truex et al., 1999). As emphasized in prior studies, this calls for considering IS flexibility throughout the overall life cycle, in addition to planning and crafting an IT infrastructure (Byrd & Turner, 2000; Lewis & Byrd, 2003).

Despite a wealth of research on IS flexibility and its impact on organizations and business processes (Gebauer & Schober, 2006; Moitra & Ganesh, 2005; Sambamurthy et al., 2003), decisions regarding IS flexibility, especially across entire IS life cycles, have rarely been included in the analysis. As a result, flexibility guidelines for managing IS have not been developed. In addition, the benefits of IS flexibility are difficult to measure physically or objectively, particularly as they are perceived by both IS staff and system users. The ability of IS flexibility to accommodate changes in the supported business processes depends on the various ways staff members combine application functions with business activities (Askenas & Westelius, 2003; Moitra & Ganesh, 2005). Thus, this study adopts an interpretive flexibility perspective, which differs from most prior studies that consider IS flexibility as built into IT artifacts through IS design and development activities (Byrd et al., 2004; Byrd & Turner, 2000; Lewis & Byrd, 2003). This study also asserts that the cognitive factors of staff members influence the decisions and adaptation activities associated with IS flexibility.

Flexibility comes at the price of complexity and additional investment (Gebauer & Schober, 2006; Stigler, 1939). Decisions regarding IS flexibility are filled with trade-offs that require finding a balance between IS rigidity and IS complexity (Gebauer & Schober, 2006; Silver, 1991). IS implementation always involves the risk of failure from rigidity, changing requirements, or too much complexity to maintain. Therefore, we apply the concept of perceived risk from the consumer behavior literature to analyze the perceived requirement for IS flexibility. This perception is related to decisions for lowering or limiting the risk of failure in future IS usage. The perceived risk approach, based on the interpretive flexibility perspective, could be an original and important direction for analyzing IS flexibility. This leads to the following research questions:

1. What kinds of factors interact to change IS flexibility in each stage of IS adoption, implementation, and post-implementation?
2. How do these factors for IS flexibility differ between IS staff and users in each stage of IS adoption, implementation, and post-implementation?
Theoretical Background

First, it is necessary to clarify the content of IS flexibility (for use and for change) and IS adaptation activities (including technology system and task adaptation), since these are the primary means of altering IS flexibility in actual usage. Subsequently, the related concepts that form the foundation of the theoretical model, perceived risk and interpretive flexibility, are reviewed.
Information System Flexibility

There are two types of IS flexibility: (1) IS flexibility for use is the range of possibilities provided by an information system until a major change is required, and (2) IS flexibility to change is the potential adaptability of a given information system to further changes (Gebauer & Schober, 2006; Knoll & Jarvenpaa, 1994). Mostly, scholars focus the issue of flexibility to use on the functions of the information system that can be determined by designers. However, not all features should be embedded in IT products in advance; IT resources allow uses that are not yet envisioned by the developers of the resource (Knoll & Jarvenpaa, 1994).

In addition to the decision regarding the level of IS flexibility to use (the result of combined choices about functionality, database and interface), another consideration is IS flexibility to change. This involves the choices influencing the effort required to change an information system after its initial implementation. Flexibility to change usually plummets as the system life cycle progresses from implementation to maintenance. That is why Duncan (1995) introduced the concept of a flexible IT infrastructure and identified the basic components of IT infrastructures. However, Prahalad and Krishnan (2002) found that the quality of IT infrastructures, especially the flexibility of application systems, has lagged behind needs and become an impediment. Many companies are still constrained by legacy infrastructure, including incompatible databases and applications, poor data quality, and restricted scalability of vendors' software modules.
IS Adaptation Activities

Although flexibility to use and flexibility to change are properties of IS architecture and applications shaped in the design stage, their actual impact is also influenced by subsequent technology adaptation activities in both the implementation and post-implementation stages (Tyre & Orlikowski, 1994). These adaptations are the major ways to increase user satisfaction and IS flexibility after the IS adoption stage. Technology adaptation refers to the adjustments and changes following the installation of a new technology in a given setting (Tyre & Orlikowski, 1994). Technology adaptation is further grouped into two kinds of IS adaptation activities: system adaptation and task adaptation (Hong & Kim, 2002). Even with the popularity of packaged software, IS implementation often requires organizations to adapt the application systems and/or their processes to fit the operations embedded in the software of the adopted information systems. This software modification or adaptation increases the feature-function fit between the systems and the adopting organizations. Such system adaptation is categorized into customization, extension, and modification (Brehm et al., 2001). The other kind of adaptation, which weighs organizational change against the functions of packaged software, is task adaptation. Task adaptation includes process adaptation and operation adaptation: users change or adjust their processes to fit those embedded in the adopted application systems. Both system adaptation and task adaptation have become the main activities for preventing IS failure, improving the organizational fit of IS usage, and maintaining IS flexibility. However, most decisions about IS adaptation activities focus only on user satisfaction and organizational fit in the short term, that is, on flexibility to use an information system. Each adaptation activity influences the flexibility range of the IS today as well as in the future. This viewpoint is often ignored by most stakeholders and leads to increased IS complexity. It is therefore important to clarify the cognitive factors of staff members that influence the two types of IS flexibility via IS adaptation activities.
Interpretive Flexibility

The critical decision factors influencing adaptation approaches are primarily focused on perceived fit, which is widely applied through the technology acceptance model (TAM) in IS research (Amoako-Gyampah & Salam, 2004). In addition, the proposed model includes cognitive factors based on the interpretive flexibility perspective: perceived flexibility requirement and perceived IS control. From the technical point of view, information systems are considered 'engineered artifacts' expected to do exactly what they are designed for (Orlikowski, 1992). Likewise, organizations are regarded as information processing systems,
exchanging and handling information based on certain rules (Katzenstein & Lerch, 2000). According to this perspective, organizations are assumed to have clear requirements and regular usages for IS. In contrast, researchers argue that while information systems do have objective and tangible artifacts, they also have a subjective construct. Orlikowski argued that 'technology is not an external object but a product of ongoing action, design and appropriation' (1992, p. 400). Included within this perspective of technology is interpretive flexibility, which refers to 'the degree to which users of a technology are engaged in its construction (physically or socially) during development' (Orlikowski, 1992, p. 406). The same technology is likely to have different meanings and effects for different users (Pozzebon & Pinsonneault, 2005; Rose & Scheepers, 2001), particularly in the case of configurable software packages that are not developed in-house (Pozzebon & Pinsonneault, 2005). A configurable IS (e.g., an enterprise planning system), in which many individuals set parameters to reflect the requirements of different organizational features, is complex and challenging. People will have varying interpretations of a configurable IS based on their differing levels of knowledge, skill and experience with the systems. In summary, the implementation and adaptation of an application system can be analyzed as a cognitive process, and the embedded IS flexibility is affected by the interpretive flexibility perspective of users and specialists. Although the concept of interpretive flexibility has been considered in many studies, the detailed constructs and conceptual definitions of interpretive IS flexibility have not been addressed.
Research Model

Figure 1 shows the proposed theoretical model, which posits that embedded IS flexibility interacts with cognitive factors of staff members as well as with IS adaptation activities in their effects on interpretive IS flexibility. Unlike prior studies that consider IS flexibility as embedded in IT artifacts, our proposed model focuses on the cognitive factors of staff members and their influence on adoption decisions for information technology. The model also includes the interaction of the cognitive factors with IS adaptation activities as they affect IS flexibility in the implementation and post-implementation stages. The underlying theories in our model are interpretive flexibility, the technology acceptance model and perceived risk theory. The main constructs of cognitive factors in the model are developed below.
Perceived Fit

The theoretical foundation of this study is that individuals' perceptions about using IT are posited to influence adoption behaviors (Moore & Benbasat, 1991). The technology acceptance model (TAM) is one of the most popular theories in IS dealing with behavioral intentions and usage of IT (Amoako-Gyampah & Salam, 2004). TAM proposes that perceived usefulness and perceived ease of use are most relevant in explaining the behavioral intention to use IS (Davis, 1989). Davis defined perceived usefulness as 'the degree to which a person believes that using a particular system would enhance his or her job performance' and perceived ease of use as 'the degree to which a person believes that using a particular system would be free of effort.' In this study, we apply these two constructs to explain the factors influencing IS flexibility. However, a short-term view of perceived fit in IS usage leads to the nearsightedness of focusing only on IS flexibility to use. Therefore, in addition to the perceived fit perspective based on TAM, we include an additional cognitive factor that explains when and how staff members recognize the impact of adaptation activities on IS flexibility to change. We define this construct as perceived
flexibility requirement, which parallels the TAM constructs but is founded on the concept of perceived risk.

Figure 1. A theoretical model of information system flexibility with interpretive flexibility perspective. The figure links IS adaptation activities (technology and task adaptation), embedded IS flexibility, and interpretive IS flexibility (each comprising flexibility to use and flexibility to change) with the cognitive factors for IS flexibility: perceived flexibility requirement, perceived fit, and perceived IS control.
Perceived Flexibility Requirement

The concept of perceived risk is most often used by consumer researchers, who define risk in terms of the consumer's perceptions of the uncertainty and the adverse consequences of buying a product or service (Dowling & Staelin, 1994). Just as the properties of products and services can lower perceived risk, the flexibility of an information system helps avoid usage situations with uncertain consequences that would cause staff members to feel 'uncomfortable' or 'anxious'. In addition, IS differs from general consumer products: the flexibility of an information system can be continuously changed by adaptation activities beyond the adoption stage. Thus, the perceived risk of staff members not only influences information system purchase decisions but also influences their adaptation activities in the implementation and post-implementation stages. Consequently, we define the perceived flexibility requirement for IS as 'the perceived risk of staff members regarding the inflexibility of IS to use or to change, given their involvement throughout the IS life cycle'. The perceived risk and involvement of staff members in IS implementation and usage will alter the level of the perceived requirement for IS flexibility. The perceived flexibility requirement is one of the major factors affecting the adoption decisions and the decisions for IS adaptation activities in both the implementation and post-implementation stages. Although the flexibility of an information system can be altered somewhat after the adoption stage, it is not unlimited.
Perceived IS Control

Information systems not only support tasks but also control how the work is performed (Askenas & Westelius, 2003). People's perception of how the system attempts to influence their work is termed perceived IS control. IS control is thus, to a large degree, subjective: the more knowledgeable people are in crafting information systems, the easier it is for them to gain a sense of control (Askenas & Westelius, 2003; Besson & Browe, 2001). In addition, perceived IS control varies among staff members based on their familiarity with IS functionality. At today's maturity stage of the IT industry, with the standardization of IT products and the popularity of commercial software packages, one important problem associated with perceived IS control is the negotiation involved in balancing the global development of an information infrastructure with the various local contexts it must operate across (Rolland & Monteiro, 2002). A system adaptation strategy is not recommended, due to integrity degradation and to making future upgrades more difficult. But the forces of globalization on IS have led to a new control crisis: system adaptation. These adaptation activities for local contexts often slow future maintenance due to the complexity of the application systems. In brief, IS unfamiliarity, IS rigidity caused by global standardization, and IS maintenance complexity form the main sources of perceived IS control for both IS staff and users. Perceived IS control has become another factor restraining IS flexibility in each life-cycle stage. Table 1 lists the main constructs, mapped directly onto our research model in Figure 1, along with the key factors and their conceptual definitions for each construct, which are described in previous research and/or modified by the researchers.

Proposition Development

In this section, the constructs introduced above are used to analyze the relationships among IS flexibility and IS adaptation activities. These relationships are analyzed in terms of the key stages of IS usage: adoption, implementation and post-implementation. Differences in the attitudes and behavior of IS staff and users are also discussed.
IS Adoption Stage

Perceived risk theory predicts that consumers engage in risk-reducing activities such as information search (Dowling & Staelin, 1994). For IS adoption decisions, this can be achieved by collecting new information about the technology being applied, the available system functions, or the system's flexibility. Building a flexible IT infrastructure and adaptable IS are among the most important issues for IS staff and users today. However, the IS flexibility emphasized by IS staff differs from the flexibility emphasized by users. Normally, users focus on flexibility for use, because perceived usefulness and perceived ease of use are the main criteria by which an information system is evaluated. User satisfaction is changing; user needs may unfold rapidly in directions that are not well understood by the users themselves (Truex et al., 1999). The IS requirements of users evolve as organizations or environments change. This creates a requirement for IS flexibility to change. If staff members interpret short life spans as an IS failure, they might develop an IS flexibility-to-change perspective aimed at long-term ease of maintenance rather than a goal of short-term usage. Who should be responsible for stressing IS flexibility? Perceived risk theory suggests that consumers' involvement with purchase decisions influences their perception of risk (Dowling & Staelin, 1994; Laroche et al., 2003; Zaichkowsky, 1985). The responsibility for successful IS projects usually rests with the IS staff (Reddy & Reddy,
Table 1. Conceptual definitions of the constructs in the research model (each entry gives the dimension, its conceptual definition, and the previous studies it draws on)

IS flexibility for use
• System functionality: The different features a system provides the users. (Soh et al., 2000)
• Scope of underlying database: The scope of the database support for deploying reports and analyses for decision making. (Soh et al., 2000)
• User interface: The different features and methods an information system provides to users for interaction. (Soh et al., 2000)
• Processing capacity: The capacity provided by an information system before major performance losses are experienced. (Gebauer & Schober, 2006)

IS flexibility for change
• Integration: The integration of data and functionality provided by open system architecture with compatibility of applications across platforms. (Byrd & Turner, 2000; Duncan, 1995)
• Modularity: Modularity as provided by the use of reusable software modules, vendor-independent database connectivity, and object-oriented development tools. (Byrd & Turner, 2000; Duncan, 1995)
• Technology skills: The variety of skills and attitudes of the IS staff. (Byrd et al., 2004; Byrd & Turner, 2000; Duncan, 1995)

System adaptation
• Enterprise application integration (EAI): A middleware technology aimed at integrating individual applications into a seamless whole, enabling business processes and data to communicate with one another across applications, without system customization. (Hong & Kim, 2002; Soh et al., 2000; Sutherland & Heuvel, 2002)
• System customization: The adaptation activities utilizing configuration, extension and modification to fill the gap between IS functionality and organizational requirements. (Brehm et al., 2001; Hong & Kim, 2002; Soh et al., 2000)

Task adaptation
• Process adaptation: The implementing organization adapts its processes to fit those embedded in the adopted information systems. (Hong & Kim, 2002; Soh et al., 2000)
• Operation adaptation: The adaptation activities of users themselves within the operations and output styles of the adopted information systems. (Soh et al., 2000)

Perceived flexibility requirement
• Involvement: The personal relevance of IS flexibility based on inherent needs, values and interests. (Laroche et al., 2003; Zaichkowsky, 1985)
• Perceived risk: The staff members' perceptions of the uncertainty and adverse consequences of adopting and using an information system. (Cox & Rich, 1964; Laroche et al., 2003; Zaichkowsky, 1985)

Perceived fit
• Perceived usefulness: The degree to which an individual believes that using a particular system would enhance their job performance. (Davis, 1989)
• Perceived ease of use: The degree to which an individual believes using a particular system would be easy. (Davis, 1989)

Perceived IS control
• Perceived IS control for users: The user's perception of how the system is trying to influence their work. (Askenas & Westelius, 2003)
• Complexity of maintenance: The complexity and the constraints of maintenance perceived by IS staff about an adopted information system. (Braa & Hedberg, 2002; Rolland & Monteiro, 2002)
2002). IS staff are involved with IT projects from beginning to end, and IS effectiveness is usually considered a key performance indicator for IS staff. In contrast, actual user contact with information systems begins after the new system goes on-line, which is close to the end of the IS project. Thus, the involvement of users in the adoption and implementation stages is far less than that of IS staff. As a result, users hold lower perceived risk regarding IS
flexibility than do the IS staff. These arguments yield the first and second research propositions:

Proposition 1: In the IS adoption stage, staff members involved in IS decision-making who have a higher perceived risk will stress higher perceived flexibility requirements, giving more attention to the flexibility properties of the information systems under evaluation.

Proposition 2: In the IS adoption stage, IS staff are more deeply involved with IS adoption decisions and IS projects than users are, causing IS staff to have higher perceived risk regarding IS implementation and usage.

IS Implementation Stage

The fit between IS and organizations is positively related to IS implementation success, and the fit will improve with high levels of system adaptation (Hong & Kim, 2002). System adaptations can reduce the gap between the IS and the organization. Thus, when staff members perceive IS fit to be low in the implementation stage, they tend to apply a system adaptation strategy to increase the functional fit of the IS with user requirements. This results in lower resistance, reduced training needs, and less task adaptation (Bingi et al., 1999). System adaptation activities can increase the flexibility of IS; however, they may harm the IS by limiting future software upgrades, increasing system complexity, and lowering system maintainability (Brehm et al., 2001). Implementation of packaged software requires the organization to adapt its processes to fit the operations embedded in the new IS. In the IS implementation stage in particular, staff members often lack adequate knowledge about the adopted application software and often perceive more control from the unfamiliar software than is warranted. This perceived IS control leads IS staff, or other users, to avoid applying a system adaptation strategy and to replace it with a task adaptation strategy. Collectively these findings suggest the following:

Proposition 3: In the IS implementation stage, lower perceived IS fit will result in the application of a system adaptation strategy by IS staff members to increase IS flexibility.

Proposition 4: In the IS implementation stage, higher perceived IS control will lead IS staff members to apply a task adaptation strategy to maintain IS flexibility to change.
IS Post-Implementation Stage

Users with a better understanding of a system use its functions more efficiently to support their diverse daily operations (Laroche et al., 2003). The most widely used method of improving IS knowledge is user training. However, user training emphasizes system operations rather than an understanding of the entire application system or knowledge of how to apply the system in different situations. Thus, users' understanding of a system largely develops after actual IS usage, via the accumulation of experience and situated learning, that is, the learning and experience curve.

Proposition 5: With the accumulation of experience and learning in the post-implementation IS stage, users will increase their IS knowledge. They will experience higher perceived usefulness and higher perceived ease of use, and the users' interpretive IS flexibility will also increase.

When an application system goes on-line for daily usage, IS adaptation activities do not terminate. The post-implementation stage is critical to receiving value from information systems (Markus et al., 2003; Ross et al., 2003). This stage still involves opportunities for process redesign. Thus organizations have on-going requirements to add functionality to the information systems.
As the business and its environment change more frequently, the operations or processes needed by users may change repeatedly. This lowers the perceived fit of the IS, resulting in a requirement for more perceived IS flexibility.

Proposition 6: When the business or environment changes more frequently, the perceived fit of staff members will decrease and the perceived IS flexibility requirement will increase. IS adaptation activities will increase to improve the flexibility of the information systems.

Many new information systems that replaced prior legacy systems have themselves become another kind of 'legacy system' after a period of usage (Markus et al., 2003; Prahalad & Krishnan, 2002). This phenomenon results not only from technology and innovation but also from organizational changes. In particular, when organizations adopt a system adaptation strategy for process innovation, the complexity and rigidity of the IS will rapidly increase and result in higher perceived IS control by staff members.

Proposition 7: After long-term adoption of a system adaptation strategy, the complexity and rigidity of the IS will increase and cause higher perceived IS control for staff. Interpretive IS flexibility to change will decrease.
Conclusion

This study provides a model of IS flexibility covering the stages of the IS life cycle. The model considers the cognitive factors of both IS staff and users as important for leveraging IS flexibility through adaptation activities. A review of constructs based on the interpretive flexibility perspective in the literature supports these cognitive factors. The results lead to hypotheses about the relationships among these factors, IS flexibility, and adaptation activities. The theoretical basis of the proposed research model is interpretive flexibility, which was identified in Orlikowski's (1992) structural model of technology. Technology is a dynamic result of the interaction process between people and organizations. IS adaptation is the process by which staff members try to maximize an application's perceived fit, thus enhancing their satisfaction with the IS. But perceived fit for the users of the same application system is not stable, because many organizations face ever-changing environments. Therefore, a dynamic and continuous view of perceived IS fit is more appropriate for IS effectiveness today.

The proposed model combines two lines of reasoning to extend current thinking on perceived fit in IS. The first is the perceived IS flexibility requirement, based on perceived risk theory, which treats the IS flexibility requirement as a function of various kinds of risk in future IS usage. The other is perceived IS control, which originates from the limited opportunities offered by the application system. Along with other cognitive factors, perceived IS control also considers an individual's differentiated experience and subjective knowledge. Consequently, the perceived IS flexibility requirement and perceived IS control can be regarded as two different forces anchoring the balancing process between IS complexity and IS rigidity. After adding these two concepts, the model of interpretive IS flexibility is more complete and more exhaustive than those identified or proposed in prior studies.

In summary, this work contributes to the literature in three important ways. First, it extends prior IS flexibility research by highlighting the importance of cognitive factors between the embedded IS flexibility of application systems and IS adaptation activities. These cognitive factors have many effects on both IS flexibility for use and IS flexibility for change. In addition, the research model was designed to develop propositions considering the temporal dimensions of IS implementation and usage. It points to the
importance of managing the different levels of cognitive factors of IS staff and users, and proposes how these cognitive factors will change in each IS life-cycle stage. Finally, the model proposes a macro structure for comprehending and analyzing interpretive IS flexibility through the IS life cycle. The model can serve as the basis for developing strategies to reduce the dysfunctional behaviors associated with IS staff and users. Therefore, future empirical studies should use the model and its propositions to examine and statistically validate these relationships. The theoretical model is subject to systematic examination based on empirical data. A survey methodology is suggested as appropriate for testing the research model. A measurement instrument must first be developed and validated for the research constructs. Primary data should then be collected from a cross-section of a variety of industries. It is also advisable to analyze data gathered from different cultures, given the important role of cognitive factors. Results of cross-cultural empirical testing would increase the generalizability of the theoretical model proposed in this research. The implications of these findings could also be better evaluated in future research through interpretive case studies.
References
Amoako-Gyampah, K. K., and Salam, A. F. (2004). An extension of the technology acceptance model in an ERP implementation environment. Information & Management, 41(6), 731-745.
Askenas, L., and Westelius, A. (2003). Five roles of an information system: A social constructionist approach to analyzing the use of ERP systems. Informing Science, 4(3), 105-113.
Besson, P., and Rowe, F. (2001). ERP project dynamics and enacted dialogue: Perceived understanding, perceived leeway and the nature of task-related conflicts. The Data Base for Advances in Information Systems, 32(4), 47-66.
Bingi, P., Sharma, M. K., and Godla, J. K. (1999). Critical issues affecting an ERP implementation. Information Systems Management, 16(3), 7-14.
Braa, J., and Hedberg, C. (2002). The struggle for district-based health information systems in South Africa. Information Society, 18(2), 113-127.
Brehm, L., Heinzl, A., and Markus, M. L. (2001). Tailoring ERP systems: A spectrum of choices and their implications. In Proceedings of the 34th Hawaii International Conference on System Sciences.
Byrd, T. A., Lewis, B. R., and Turner, D. E. (2004). The impact of IT personnel skills on IS infrastructure and competitive IS. Information Resources Management Journal, 17(2), 38-62.
Byrd, T. A., and Turner, D. E. (2000). Measuring the flexibility of information technology infrastructure: Exploratory analysis of a construct. Journal of Management Information Systems, 17(1), 167-208.
Chang, J. C.-J., and King, W. R. (2005). Measuring the performance of information systems: A functional scorecard. Journal of Management Information Systems, 22(1), 85-115.
Cox, D. F., and Rich, S. U. (1964). Perceived risk and consumer decision-making: The case of telephone shopping. Journal of Marketing Research, 1(4), 32-39.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 318-340.
Dowling, G. R., and Staelin, R. (1994). A model of perceived risk and intended risk-handling activity. Journal of Consumer Research, 21(1), 119-135.
Duncan, N. B. (1995). Capturing flexibility of information technology infrastructure: A study of resource characteristics and their measure. Journal of Management Information Systems, 12(2), 37-57.
Gebauer, J., and Schober, F. (2006). Information system flexibility and the cost efficiency of business processes. Journal of the Association for Information Systems, 7(3), 122-146.
Hong, K.-K., and Kim, Y.-G. (2002). The critical success factors for ERP implementation: An organizational fit perspective. Information & Management, 40(1), 25-40.
Katzenstein, G., and Lerch, F. J. (2000). Beneath the surface of organizational processes: A social representation framework for business process redesign. ACM Transactions on Information Systems, 18(4), 383-422.
Knoll, K., and Jarvenpaa, S. L. (1994). Information technology alignment or "fit" in highly turbulent environments: The concept of flexibility. In Proceedings of the ACM SIGCPR Conference on Computer Personnel Research.
Laroche, M., Bergeron, J., and Goutaland, C. (2003). How intangibility affects perceived risk: The moderating role of knowledge and involvement. Journal of Services Marketing, 17(2), 122-140.
Lewis, B. R., and Byrd, T. A. (2003). Development of a measure for the information technology infrastructure construct. European Journal of Information Systems, 12(2), 94-108.
Markus, M. L., Axline, S., Petrie, D., and Tanis, C. (2003). Learning from experiences with ERP: Problems encountered and success achieved. In G. Shanks, P. B. Seddon and L. P. Willcocks (Eds.), Second-wave enterprise resource planning systems (pp. 23-55). Cambridge: Cambridge University Press.
Moitra, D., and Ganesh, J. (2005). Web services and flexible business processes: Towards the adaptive enterprise. Information & Management, 42(7), 921-933.
Moore, G. C., and Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192-222.
Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398-427.
Pozzebon, M., and Pinsonneault, A. (2005). Challenges in conducting empirical work using structuration theory: Learning from IT research. Organization Studies, 26(9), 1353-1376.
Prahalad, C. K., and Krishnan, M. S. (2002). The dynamic synchronization of strategy and information technology. MIT Sloan Management Review, 43(4), 24-33.
Reddy, S. B., and Reddy, R. (2002). Competitive agility and the challenge of legacy information systems. Industrial Management & Data Systems, 102(1), 5-16.
Rolland, K. H., and Monteiro, E. (2002). Balancing the local and the global in infrastructural information systems. Information Society, 18(2), 87-100.
Rose, J., and Scheepers, R. (2001). Structuration theory and information systems development: Frameworks for practice. In Proceedings of the 9th European Conference on Information Systems, Bled, Slovenia.
Ross, J. W., Vitale, M. R., and Willcocks, L. P. (2003). The continuing ERP revolution: Sustainable lessons, new modes of delivery. In G. Shanks, P. B. Seddon and L. P. Willcocks (Eds.), Second-wave enterprise resource planning systems (pp. 102-132). Cambridge: Cambridge University Press.
Sambamurthy, V., Bharadwaj, A., and Grover, V. (2003). Shaping agility through digital options: Reconceptualizing the role of information technology in contemporary firms. MIS Quarterly, 27(2), 237-263.
Silver, M. S. (1991). Systems that support decision makers: Description and analysis. Chichester: Wiley & Sons.
Soh, C., Kien, S. S., and Tay-Yap, J. (2000). Cultural fits and misfits: Is ERP a universal solution? Communications of the ACM, 43(4), 47-51.
Stigler, G. (1939). Production and distribution in the short run. Journal of Political Economy, 47(3), 305-327.
Sutherland, J., and van den Heuvel, W.-J. (2002). Enterprise application integration and complex adaptive systems. Communications of the ACM, 45(10), 59-64.
Truex, D. P., Baskerville, R., and Klein, H. (1999). Growing systems in emergent organizations. Communications of the ACM, 42(8), 117-123.
Tyre, M. J., and Orlikowski, W. J. (1994). Windows of opportunity: Temporal patterns of technological adaptation in organizations. Organization Science, 5(1), 98-119.
Zaichkowsky, J. L. (1985). Measuring the involvement construct. Journal of Consumer Research, 12(3), 341-352.
This work was previously published in International Journal of Enterprise Information Systems (IJEIS), edited by Madjid Tavana, pp. 32-43, copyright 2009 by IGI Publishing (an imprint of IGI Global).
Chapter 7.4
Challenges in Enterprise Information Systems Implementation: An Empirical Study
Ashim Raj Singla Indian Institute of Foreign Trade, New Delhi, India
ABSTRACT Enterprise information systems are the most integrated information systems, cutting across various organizations as well as various functional areas. Small and medium enterprises, competitors' behavior, and business partner requirements are identified and established dimensions that affect these systems. Further, it has been observed that such enterprise-wide software systems often prove to be failures either in design or in implementation. A number of reasons contribute to the success or failure of such systems. Enterprise information systems inherently present unique risks due to tightly linked interdependencies of business processes, relational databases, and process reengineering. Knowledge of such DOI: 10.4018/978-1-61520-625-4.ch014
risks is important in system design and program management, as these risks contribute to the success of the overall system. In this chapter, an attempt has been made to study the design and implementation risk factors for ERP systems in large-scale manufacturing organizations. Based on the model used to study ERP risks, and on the resulting findings, various recommendations have been put forward to suggest a strategy for mitigating and managing such risks.
INTRODUCTION Enterprise information systems are a corporate marvel, with a huge impact on both the business and information technology worlds. Organizations today have been talking about enterprise resource solutions as a means of business innovation. They
Figure 1. Vendor sophistication: Number of vendors, vendor quality (source: Frost and Sullivan)
are designed to enhance competitiveness by upgrading an organization's ability to generate timely and accurate information throughout the enterprise and its supply chain. A successful enterprise-wide information system implementation can shorten production cycles, increase the accuracy of demand forecasts for materials management and sourcing, and lead to inventory reduction through better materials management. ERP was the first such enterprise-wide product implementing the client-server concept; it has changed the nature of jobs in all functional areas and provides one of the primary tools for reengineering. ERP is being used worldwide for improving business productivity, streamlining business operations, reducing cost, and improving efficiency. ERP systems act as integrators, bringing multiple systems together under one program and database. Further, CRM helps in keeping better track of customers, thereby improving business processes. In contrast, typical legacy IT systems are composed of multiple software products, each operating discretely, often resulting in conflicting information for an executive to reconcile when determining profitability status and growth strategies. With Sarbanes-Oxley and other regulatory requirements, it is
becoming increasingly difficult for utilities to operate or be compliant without full integration across the enterprise. Thus the adoption of ERP is set to increase. Today, India has emerged as the fastest-growing IT hub in the world, with growth dominated by IT software and services such as Custom Application Development & Maintenance (CADM), system integration, IT consulting, application management, infrastructure management services, software testing, service-oriented architecture, and Web services.
The IT intellectual advantage for Indian SMEs India has always been considered a strong IT destination due to its young, tech-savvy, English-speaking population. When rated on human sophistication, India comes just below two nations, the US and the UK, while on vendor sophistication it is almost equal to Germany and considerably ahead of China. This is depicted in the IT intellectual advantage chart shown above in Figure 1 (Source: Frost & Sullivan, 2006).
Hence, the intellectual advantage is all there, but what India needs is for this knowledge to be passed to the grassroots. The challenging Indian mid-market needs an understanding of the advantages of technology, as it helps them integrate with large business supply chains for faster decision making and planned time management. Enterprise applications usage in India began in the early 1990s, when large manufacturing enterprises started adopting ERP to streamline their processes through technology. Soon, large enterprises from various verticals jumped on the bandwagon of adopting ERP, CRM, and SCM solutions as the complexities of their businesses increased. Presently, small and medium enterprises (SMEs) are also adopting these enterprise applications in a big way, given the increased growth, awareness, and need to streamline business processes. This rising demand from the mid-market has attracted the focus of most ERP vendors and has made this segment a very competitive one in the Indian market. According to Frost & Sullivan, the Indian enterprise applications software license market crossed the $100 million mark in 2007. With an expected overall 10-year CAGR of 16.77 percent, the market is estimated to reach $372.05 million in 2015. Further, various other surveys on Indian mid-market adoption of ERP indicate that:
• Small and medium enterprises have been slow at information technology adoption. Adoption of IT tools is low in priority; the feeling is that the same money can be utilized to improve and expand the business, reflecting a short-term focus on financial gains.
• An Access Market International (AMI) study shows that only 3% of SMEs have a LAN at their offices or factories, just 15% have an internet connection, 4% have broadband, and a mere 1% have a website.
• An IDC survey shows that 80% of large enterprises were aware of ERP, whereas this figure was less than 35% for SMEs.
The results of such surveys indicate slow growth of enterprise-wide information systems in small and mid-sized organizations. A study to identify the challenges faced by such organizations in ERP systems implementation is therefore highly important.
Issues and Challenges in Enterprise Information Systems Implementation Implementing an ERP system is a challenging endeavor, as the implementation is complex and resource-intensive and there are organizational, operational, and cultural issues involved. According to Gray A. Langenwalter (2000), the ERP implementation failure rate is from 40% to 60%, yet companies try to implement these systems because they are absolutely essential to responsive planning and communication. The competitive pressure unleashed by the process of globalization is driving implementation of ERP projects in increasingly large numbers, so a methodological framework for dealing with the complex problem of evaluating ERP projects is required (Anand Teltumbde, 2000). It has also been found that unique risks in ERP design arise due to tightly linked interdependencies of business processes, relational databases, and process reengineering (Dalal et al., 2004). Similarly, business risks derive from the models, artifacts, and processes (MAPs) that are chosen and adopted as part of implementation, and are generated from the firm's portfolio of MAPs with respect to their internal consistency and their external match with business partners. Organizational risks derive from the environment – including personnel and organizational structure – in which the system is chosen and implemented (Holsapple et al., 2002). According to Umble & Umble (2002), the three main factors that can be held responsible for failure of an ERP system are: poor planning or poor management; change in business goals during the project; and lack of business management support. In another study, it has been found that companies spend large sums developing ERP systems that are
Table 1. Challenges in enterprise information system implementation

Risk factor: Users involvement
Discussion: Lack of full-time commitment of customers to project management and project activities (Mary Sumner, 2000). Lack of sensitivity to user resistance is a risk (Avraham Shtub, 2001). Users are not adequately involved in ERP design (Daniel E. O'Leary, 2002; Sally Wright and Arnold M. Wright, 2002).

Risk factor: Users training
Discussion: Without adequate training, the system can never be used properly, nor can it ever achieve the returns that were projected (Gibson, Nicola et al., 1999; Daniel E. O'Leary, 2002; Sally Wright and Arnold M. Wright, 2002).

Risk factor: Process reengineering (BPR)
Discussion: Failure to redesign business processes (Mary Sumner, 2000). Process reengineering was required (Sally Wright and Arnold M. Wright, 2002). If experts did not expect that process reengineering would be required, that would force us to question their expertise (Ribbers and Schoo, 2002). Employees feeling powerless due to downsizing, and lacking critical information due to insufficient participation, may lead to BPR failure (Pernille Kraemmerand, Charles Moller, and Harry Boer, 2003).

Risk factor: ERP lacked adequate control
Discussion: Inability to build bridges to legacy applications (Teltumbde, Anand, 2000). The ERP system initially lacked adequate control (Umble & Umble, 2002). Control risk potential varies by the mandatory nature of ERP subsystems; risk is more aligned with the company's business operations (Sally Wright and Arnold M. Wright, 2002).

Risk factor: System design
Discussion: Poorly designed, the system did not adequately mirror required processes (Daniel E. O'Leary, 2002; Sally Wright and Arnold M. Wright, 2002). Failure to adhere to the standardized specifications which the software supports (Mary Sumner, 2000). Lack of integration (Avraham Shtub, 2001). Every new version changes processes; the system increases variety and fragmentation of processes; linking ERP to other systems increases complexity (Ribbers and Schoo, 2002).

Risk factor: Vendor performance
Discussion: Potential problems arise with third-party implementation partners. Vendors may still be learning the new version, or may deploy college students on implementation (Theodore Grossman and James Walsh, 2004). Vendors may over-promise new features in new versions; organizations should be highly skeptical of these promises because they often turn out to be worthless (Theodore Grossman and James Walsh, 2004).

Risk factor: Infrastructure design
Discussion: Infrastructure design must be a collaborative effort between client and vendors (Shivraj, Shantanu, 2000). ERP systems are based on advanced technologies that require replacement of existing infrastructure; this is risky, as it requires additional capital (Niv Ahituv, Seev Neumann, and Moshe Zviran, 2002).

Risk factor: Implementation is poorly executed
Discussion: Allen, David (2002).

Risk factor: System does not provide information needed
Discussion: The system is not a task-technology fit (Mary Sumner, 2000). The system designed did not provide the information needed for the task (Sally Wright and Arnold M. Wright, 2002; Davis, Gordon B., 1986; Teltumbde, Anand, 2000).

Risk factor: Data conversion is poorly executed
Discussion: Colette Rolland and Naveen Prakash (2001). Legacy data cannot always be converted for ERP systems; a group must be assigned responsibility for evaluating the quality and completeness of legacy data (Theodore Grossman and James Walsh, 2004).

Risk factor: Budget overrun
Discussion: Ribbers and Schoo (2002).

Risk factor: Time overrun
Discussion: Daniel E. O'Leary (2002).

Risk factor: Skill mix
Discussion: Insufficient training and reskilling, insufficient internal expertise, lack of business and technology knowledge, and failure to mix internal and external expertise effectively are risk factors (Mary Sumner, 2000).

Risk factor: Management structure and strategy
Discussion: Lack of top management support, lack of a proper management control structure, lack of project champions, and ineffective communications (Allen, David, 2002).

Risk factor: Network capacity to allow proper access to the ERP system
Discussion: Stress testing of the network and system must be performed before going live (Umble & Umble, 2002).

Risk factor: Security risks
Discussion: Some ERP subsystems (payroll, supply chain, financial) exhibit greater control and security risks than others (Gibson, Nicola et al., 1999). Failure to check unauthorized access to information (Sally Wright and Arnold M. Wright, 2002).
not utilized. It is quite common for ERP projects to finish late, cost more than predicted, and prove unreliable and difficult to maintain. Moreover, BPR has also had a high failure rate, with consultants estimating that as many as 70% of BPR projects fail (Hammer and Champy, 1993). Hammer (1990) advocates that the power of modern technology should be used to radically redesign business processes to achieve dramatic improvements in their performance. From a software perspective, ERP systems are complete. But from the business perspective, it is found that software and business processes need to be aligned, which involves a mixture of business process design and software configuration (Hammer, 1990). So a purely technical approach to ERP system design is insufficient. According to David Allen and Thomas Kern (2002), careful use of communication and change management procedures is required to handle the business process reengineering impact that often accompanies ERP systems; this can alleviate some of the problems, but a more fundamental issue of concern, the cost feasibility of system integration, training and user licenses, system utilization, etc., also needs to be checked. A design interface with a process plan is an essential part of the system integration process in ERP. By interfacing with a process plan module, a design interface module helps sequence the individual operations needed for the step-by-step production of a finished product from raw materials (Hwa Gyoo Park, 1999).
Similar contributions identifying the challenges in enterprise information system implementation have been made by various researchers. An attempt has been made to summarize such challenges (see Table 1). After summarizing the literature analysis given in Table 1, and also based on discussions with users of ERP, implementation consultants, academicians, etc., a list of risk factors in enterprise information systems implementation was prepared (see Table 2). Knowledge of such risk factors is important in planning and conducting assurance engagements on the reliability of these complex computer systems. It is also highly important to rank these risk factors according to their importance. So, keeping this objective in view, a study was conducted to identify and analyze risk factors in the planning and design of ERP systems. The study was conducted in two large manufacturing public sector organizations located in northern India. The first organization specializes in R&D as well as manufacturing of telecommunication systems and related equipment. It has developed its own enterprise-wide product to streamline the whole system. The second organization is a large tractor manufacturing unit controlling several subsidiary units, one of which develops mini trucks. This organization is also using its self-designed enterprise-wide product.
Table 2. Risk factors in implementation of enterprise information systems

Factor 1. General risk factors
1. Users are not adequately involved in design.
2. Users are not adequately trained to use the ERP system.
3. Process reengineering is required.
4. ERP system initially lacked adequate control.
5. Poorly designed, it did not adequately mirror required processes.
6. Implementation is poorly executed.
7. System designed did not provide information needed to do the task.
8. Data conversion is poorly executed.
9. Difficulty in executing new processes.

Factor 2. Risk factors on the basis of time, cost, and other factors
1. Budget overrun.
2. Time overrun.
3. Lack of benefits.
4. System does not meet business plan criteria.

Factor 3. Risk factors encountered while migrating to the new system
1. Time needed for implementation.
2. Technical problems with the new version.
3. Bad estimates with migration partners.
4. Cost involved.
5. Quality of migration support.

Factor 4. Risk factors from users involved in various phases of the ERP system
1. Inattention to work with the new system.
2. Inadequate training or failure to follow procedures of the new system.
3. Not smart enough to understand the system advantages.
4. Are lazy and want to continue working with traditional procedures.
5. Overly perfectionist in their experience.
6. Too difficult to learn and use in a reasonable amount of time.
7. Creating political problems with some users by changing political or work distribution in the organization.
8. Lack of internal expertise and skill set.
9. Lack of ability to recruit and retain qualified ERP system developers.

Factor 5. Risk factors in planning and requirement analysis
1. Lack of proper top management support.
2. Lack of a champion and proper project management structure.
3. Failure to redesign business processes.

Factor 6. Risk factors in system design of ERP
1. System is expensive.
2. Increasing variety and fragmentation of processes.
3. Choice of operating system may influence the knowledge and people required.
4. Network capacity to allow proper access to the ERP system.
5. Every new version changes processes.
6. Linking ERP to other systems increases complexity.

Factor 7. Security issues
1. Too little effort goes into supporting effective use.
2. System is not updated as business needs change.
3. Failure to check unauthorized access to information.
4. Unauthorized use of access codes and financial passwords.
5. Theft by entering fraudulent transaction data.
6. Theft by stealing or modifying data.

Factor 8. Other risk factors
1. Attempts in industry to steal business secrets.
2. Extensive reliance on computer and internet connectivity.
3. Large R&D effort as a percentage of revenues.
4. National / international media profile.
5. Operator error while executing transactions.
6. Software bugs or data errors.
Research Methodology Adopted
1. Secondary data for the research was collected from related books, publications, annual reports, and records of the organizations under study.
2. Primary data was collected through a questionnaire-cum-interview technique. For this purpose, a questionnaire was created from already established models and a survey of the literature. The questionnaire was first pre-tested on 20 managers from the actual sample to be interviewed, to check its reliability and content validity. Cronbach's alpha test was applied, and the value of the coefficient was 0.87. The pre-tested and modified questionnaire was then administered to all the sampled respondents. For developing the design model, the managers in the EDP departments of each of the selected organizations were interviewed. For developing the implementation model, all the managers in the EDP departments, along with an appropriate sample of managers at the three levels of management of each of the selected organizations, were selected. The sample of randomly selected managers was proportionate and statistically representative of the universe of managers of the selected organizations. 115 experienced respondents from both companies, specializing in their related production skills and in use of the ERP system, participated in the study.
3. Data was collected on a 5-point Likert scale depending on the relative importance of a factor. Finally, factor analysis was applied to the data using the SPSS package to arrive at useful conclusions. Tables 6 and 7 present the rotated factor matrices, i.e., the final statistics, for risk factors in ERP implementation and risk factors in ERP design and planning. All factor loadings greater than 0.45 (ignoring the sign) have been considered for further analysis. The 34 variables of these tables were then loaded on 8 factors (see Table 3). In order to find out which of the above factors rank as the most satisfying/dissatisfying for ERP implementation, the factor-wise average score (from the 5-point Likert scale) was calculated. The satisfaction level of the factors on the basis of factor-wise average scores has been categorized as given below (see Table 4). The naming of the factors has been done on the basis of the size of the factor loadings for the respective variables: the greater the factor loading, the greater the chances of the factor being named after those variables (see Table 3).
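For readers who wish to replicate the two statistical steps just described, the sketch below shows one way to compute Cronbach's alpha and a Varimax-rotated factor solution in Python. This is a minimal illustration, not the authors' actual SPSS procedure; the data file name and column layout are hypothetical, and the third-party factor_analyzer package is assumed to be available.

import pandas as pd
from factor_analyzer import FactorAnalyzer

# One row per respondent (N=115), one column per 5-point Likert item.
responses = pd.read_csv("erp_risk_survey.csv")  # hypothetical data file

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = (k/(k-1)) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # study reports 0.87

# Principal-component extraction with Varimax rotation, mirroring Tables 6 and 7.
fa = FactorAnalyzer(n_factors=8, rotation="varimax", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

# Retain only loadings greater than 0.45 in absolute value, as in the study.
print(loadings[loadings.abs() > 0.45])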
RESULTS AND DISCUSSIONS The collected data was subjected to various statistical methods to analyze the risk factors in ERP system design and implementation. Table 3 lists the scores for each of the five risk factors in implementation and the three risk factors in ERP system design. Table 4 gives the ranking of these risk factors as calculated by factor analysis. The rotated component matrices of the factor analysis for the two classes of risk factors are given in Tables 6 and 7, respectively. An analysis of Table 3 reveals that many ERP implementation problems revolve around inadequate training, non-involvement of users in the new system, project management issues, ineffective communication with users, implementation constraints, problems in process design, migration problems, etc. The analysis of Table 4 reveals that project management issues (4.29) at various phases of ERP implementation constitute the most important risk factor. Project management is important in accomplishing project objectives and aligning the whole system to the new implementation plan. It is also found that after BPR is applied, the jobs and responsibilities of users working in a particular system get
Table 3. Average score of implementation risk factors as tested in both organizations (N=115). Columns 1-5 give the number of respondents choosing each rating on the 5-point scale; Avg is the mean rating. The implementation issues load on the factors Implementation Constraints (3.876), After going live (3.4375), ERP Software (3.1325), System Migration (3.25), and Project Management (4.29); the planning and design issues load on the factors System Design (3.35), Implementation team (3.76), and Business process design (3.76).

Risk factors in ERP system implementation:

Issue | 1 | 2 | 3 | 4 | 5 | Avg
Process reengineering is required. | 0 | 7 | 27 | 53 | 28 | 3.89
Budget overrun. | 0 | 1 | 12 | 43 | 59 | 4.39
Time overrun. | 0 | 6 | 18 | 52 | 39 | 4.08
Inattention to work with the new system. | 0 | 7 | 35 | 55 | 18 | 3.73
Not smart enough to understand the system advantages. | 4 | 24 | 36 | 37 | 14 | 3.29
Bad estimates with migration partners. | 1 | 32 | 50 | 26 | 6 | 3.03
Failure to check unauthorized access to information. | 0 | 7 | 53 | 35 | 20 | 3.59
Extensive reliance on computer and internet connectivity. | 1 | 26 | 53 | 23 | 12 | 3.17
Large R&D effort as percentage of revenues. | 0 | 1 | 35 | 47 | 32 | 3.96
Too difficult to learn and use in reasonable amount of time. | 0 | 37 | 46 | 25 | 5 | 2.95
Theft by entering fraudulent transaction data. | 0 | 21 | 66 | 22 | 6 | 3.11
Operator error while executing transaction. | 0 | 19 | 61 | 35 | 0 | 3.14
Software bugs or data errors. | 2 | 13 | 49 | 47 | 4 | 3.33
Data conversion is poorly executed. | 0 | 14 | 42 | 49 | 10 | 3.48
Difficulty in executing new processes. | 0 | 7 | 50 | 43 | 15 | 3.57
National / International media profile. | 8 | 32 | 61 | 14 | 0 | 2.70
Users are not adequately trained to use the ERP system. | 0 | 0 | 24 | 47 | 44 | 4.17
Implementation is poorly executed. | 0 | 5 | 14 | 58 | 38 | 4.12
Creating political problem with some users by changing political or work distribution in organization. | 0 | 1 | 15 | 36 | 63 | 4.40
Lack of internal expertise and skill set. | 0 | 0 | 15 | 30 | 70 | 4.48

Risk factors in ERP system planning and design:

Issue | 1 | 2 | 3 | 4 | 5 | Avg
Users are not adequately involved in design. | 5 | 15 | 37 | 37 | 21 | 3.47
ERP system initially lacked adequate control. | 0 | 32 | 32 | 33 | 18 | 3.32
Poorly designed, it did not adequately mirror required processes. | 2 | 12 | 39 | 36 | 26 | 3.63
Technical problems with new version. | 4 | 3 | 23 | 58 | 27 | 3.88
Choice of operating system may influence the knowledge and people required. | 18 | 36 | 39 | 12 | 10 | 2.65
Extensive reliance on computer and internet connectivity. | 1 | 26 | 53 | 23 | 12 | 3.17
Lack of proper top management support. | 0 | 0 | 8 | 49 | 58 | 4.43
Lack of champion and proper project management structure. | 0 | 7 | 34 | 44 | 30 | 3.84
Failure to redesign business processes. | 0 | 7 | 28 | 39 | 41 | 3.99
Network capacity to allow proper access to ERP system. | 1 | 52 | 35 | 24 | 3 | 2.79
System does not meet business plan criteria. | 5 | 23 | 42 | 41 | 4 | 3.14
Increasing variety and fragmentation of processes. | 0 | 4 | 4 | 54 | 53 | 4.36
Every new version changes processes. | 0 | 22 | 25 | 47 | 21 | 3.58
Large R&D effort as percentage of revenues. | 0 | 1 | 35 | 47 | 32 | 3.96
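Each Avg value in Table 3 is simply the frequency-weighted mean of the 1-5 ratings over the 115 respondents. A quick check of the "Budget overrun" row (frequencies taken from the table) illustrates the arithmetic:

# Frequency-weighted Likert mean: sum(rating * frequency) / N.
freqs = {1: 0, 2: 1, 3: 12, 4: 43, 5: 59}   # rating -> respondents, from Table 3
n = sum(freqs.values())                      # 115
avg = sum(rating * count for rating, count in freqs.items()) / n
print(n, round(avg, 2))                      # 115 4.39, matching the table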
Table 4. Ranking of ERP system risk factors as tested in both organizations (N=115)

Risk factor | Score | Ranking
Project Management | 4.29 | 1
Implementation constraints | 3.87 | 2
Business process design | 3.76 | 3
Implementation team | 3.76 | 4
After going live | 3.43 | 5
System design | 3.35 | 6
System migration | 3.25 | 7
ERP Software | 3.13 | 8
changed, which leads to political problems among employees. Moreover, implementation of ERP creates fear among employees (users) of losing their jobs, as ERP implementation leads to downsizing of the number of jobs required in different business processes. Similarly, lack of experts, application-specific knowledge, and user experience, together with the problem of recruiting and retaining good ERP specialists, also contribute to project risk. Project management issues are followed by the risk factors arising from implementation constraints (3.87), such as budget overrun, time overrun, and BPR, which rank 2nd. Similarly, the problems in business process design (3.76) and system design (3.35) are ranked 3rd and 6th, respectively, according to the study. In the system design approach, less emphasis may be placed on the technical aspects of software development, but there is a need to balance the business process design, software configuration, and project management aspects of ERP implementation with the overall strategy and structure of the organization. The project needs to be based on an enterprise-wide design, and we
Table 5. Recommendations to reduce/control risk factors in ERP projects

User involvement and training:
- Communicating to users the project strategy and objectives.
- Classifying users on the basis of their experience and skill set and assigning them appropriate roles.
- Total user involvement and commitment.
- Effectual training strategy for different roles.
- Upgrading the existing skill set to meet the requirements.

Project management:
- Strategies for recruiting and retaining technical personnel.
- Model-based implementation strategy.
- Check on budget and time requirements at each stage of the project.
- Attain top management commitment to redesign business processes.
- Defining levels and hierarchies in the organization for effective decision making.

Technology implementation:
- Strategy for migration and data conversion from the existing legacy system to the newly designed system.
- Maintain a log of software bugs and transaction failures.
- Simple training strategy for even very complicated modules.
- Effective use of existing IT infrastructure.

System design:
- Design should check process fragmentation.
- Strategy for migrating from the existing software version to the new version.
- Design should meet business plan criteria and adequately mirror required processes.
- Use object- and component-oriented techniques in system design.
- COM/SOM/OMA/OMG object models can be used to make object definitions independent of the programming language.
- System architecture should support partitioning of the application across different computers.

Integration and technology planning:
- Carefully select the operating system and network capacity keeping in mind user requirements and transaction loads.
- Keep a record of technical problems with the new system.
- Client-server / distributed network implementation based on object communication.
- Object communication can be implemented through CORBA or OLE.
Table 6. Rotated component matrix for risk factors in ERP system implementation as tested in both organizations (N=115)

Issue | Component 1 | Component 2 | Component 3 | Component 4 | Component 5
Users are not adequately trained to use the ERP system. | .455 | .299 | -.014 | .003 | .666
Process reengineering is required. | .797 | .243 | .082 | .170 | .178
Implementation is poorly executed. | .464 | .049 | .202 | .364 | .485
Data conversion is poorly executed. | .269 | .047 | .312 | .642 | .340
Difficulty in executing new processes. | .101 | -.002 | .279 | .765 | .168
Budget overrun. | .683 | -.121 | .280 | -.038 | .259
Time overrun. | .763 | -.247 | .274 | .242 | .109
Bad estimates with migration partners. | .272 | .589 | .006 | .472 | .130
Quality of migration support. | .449 | .454 | .411 | .173 | -.043
Inattention to work with the new system. | .655 | .492 | .122 | .014 | .032
Not smart enough to understand the system advantages. | .678 | .360 | .310 | .340 | .016
Too difficult to learn and use in reasonable amount of time. | .208 | .460 | .599 | .238 | -.069
Creating political problem with some users by changing political or work distribution in organization. | -.008 | .273 | -.107 | .246 | .795
Lack of internal expertise and skill set. | .222 | .010 | .446 | .047 | .658
Failure to check unauthorized access to information. | -.082 | .688 | .243 | .120 | .334
Theft by entering fraudulent transaction data. | .100 | .437 | .500 | .382 | .122
Extensive reliance on computer and internet connectivity. | .083 | .750 | -.110 | .443 | .087
Large R&D effort as percentage of revenues. | .065 | .759 | .286 | -.105 | .190
National / International media profile. | .085 | .324 | .034 | .682 | -.023
Operator error while executing transaction. | .221 | .283 | .808 | .164 | .122
Software bugs or data errors. | .304 | -.042 | .807 | .119 | .075
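To make the factor-naming rule described in the methodology concrete, the small sketch below assigns each variable to the component on which it has its largest absolute loading, applying the study's 0.45 cut-off. It is a hypothetical illustration, not the study's SPSS output; only a few rows transcribed from Table 6 are used.

import pandas as pd

# A few rows transcribed from Table 6 (components C1..C5).
loadings = pd.DataFrame(
    {
        "C1": [.455, .797, .683, .221],
        "C2": [.299, .243, -.121, .283],
        "C3": [-.014, .082, .280, .808],
        "C4": [.003, .170, -.038, .164],
        "C5": [.666, .178, .259, .122],
    },
    index=[
        "Users are not adequately trained",
        "Process reengineering is required",
        "Budget overrun",
        "Operator error while executing transaction",
    ],
)

dominant = loadings.abs().idxmax(axis=1)     # component with largest |loading|
strong = loadings.abs().max(axis=1) > 0.45   # apply the 0.45 cut-off
print(pd.DataFrame({"component": dominant, "retained": strong}))
# e.g. "Budget overrun" loads on C1 (.683) and, per the discussion in the text,
# helps name the Implementation Constraints factor with other C1 variables.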
must define what is needed at the enterprise-wide level and then apply it at the business unit level. Similarly, if a system design does not adequately mirror required processes, does not meet the business plan, and leads to increased variety and fragmentation of processes, then it becomes a potential risk factor. Risk factors in system design also include software factors such as lack of discipline and standardization of code, developing wrong functions and a wrong user interface, a continuous stream of changes, lack of an effective methodology, and poor estimation, which can lead to both cost and time overruns.
It is also found that lack of support at various management levels and problems in building the right implementation team (3.76) rank as the 4th most important risk factor. An improper management structure can lead to a decentralized system and thus create excessive duplication of effort. Further, the presence of many people working at the same level is also a source of conflict among their views and of unresolved problems. Similarly, the ERP implementation team must include the right mix of people with functional and technical knowledge, drawing on the best people available in the organization. Generally, it is very difficult to hire and retain
Table 7. Rotated component matrix for risk factors in ERP system design as tested in both organizations (N=115)

Issue | Component 1 | Component 2 | Component 3
Users are not adequately involved in design. | .715 | -.184 | .431
ERP system initially lacked adequate control. | .855 | .128 | .269
Poorly designed, it did not adequately mirror required processes. | .798 | .033 | .044
System does not meet business plan criteria. | .516 | .177 | .554
Technical problems with new version. | .681 | -.106 | .565
Lack of proper top management support. | .141 | .613 | .315
Lack of champion and proper project management structure. | -.002 | .875 | .001
Failure to redesign business processes. | -.005 | .883 | .135
Increasing variety and fragmentation of processes. | .073 | .165 | .880
Choice of operating system may influence the knowledge and people required. | .660 | .486 | -.170
Network capacity to allow proper access to ERP system. | .424 | .648 | .255
Every new version changes processes. | .078 | .457 | .636
Extensive reliance on computer and internet connectivity. | .735 | .240 | .078
Large R&D effort as percentage of revenues. | .509 | .133 | .509
good ERP professionals. Similarly, in the after-going-live phase with the new system (3.43), common problems include failure to check unauthorized access to information and extensive reliance on computer and internet connectivity; in system migration (3.25), errors in data conversion and difficulty in executing new processes are the common risk factors. According to the study, problems in the ERP software itself (3.13) rank lowest in the list. Sometimes a few software bugs can create transaction errors, and it is also quite possible that the software is too difficult to learn and use in a reasonable amount of time. Risk factors in terms of inadequate technical infrastructure, i.e., the need for new hardware and software depending on application size (project scope, number of users), technical complexity, and links to the existing legacy system, also need to be tackled.
RECOMMENDATIONS It emanates from the above discussion that an ERP system links together an organization's strategy, structure, and business processes with its IT systems. It is important to communicate what is happening, including the scope, objectives, and activities of the ERP project. Similarly, staffing appropriateness and personnel shortfalls need to be checked carefully. In case of inadequate internal expertise, it is recommended that the organization hire consultants for the technical and procedural challenges in design and implementation of specific application modules. Instead of a large number of programmers with average skills, it is better to have a few business consultants who have specialized expertise in their specific domains. Similarly, it is very important to create a sound project management plan, keeping in mind the project size and structure, the skill set available, the experience of the company, etc., to avoid
high project cost and time overruns. The various strategies for mitigating as well as managing ERP risks have been listed in Table 5. To conclude, it may be emphasized that user involvement, user training, project management, technology implementation, system design, and integration and technology planning are the key issues that constitute the critical risk factors in ERP design and implementation. Meticulous planning of these issues would help ensure smooth and successful implementation of ERP systems in any organization in general, and in the selected organizations in particular.
References
Ahituv, N., Neumann, S., & Zviran, M. (2002). A system development methodology for ERP systems. Journal of Computer Information Systems.
Allen, D., Kern, T., & Havenhand, M. (2002). ERP critical success factors: An exploration of the contextual factors in public sector institutions. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences.
Boynton, A. C., & Zmud, R. W. (1984). An assessment of critical success factors. Sloan Management Review, 25(4), 17-27.
Davis, G. B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29(3), 232-238.
Franch, X., Illa, X., & Pastor, J. A. (2000). Formalising ERP selection criteria. Washington, DC: IEEE.
Gibson, N., Light, B., & Holland, C. P. (1999). Enterprise resource planning: A business approach to systems development. In Proceedings of the 32nd Hawaii International Conference on System Sciences.
Goyal, D. P. (2000). Management information systems: A managerial perspective. New Delhi: Macmillan India Ltd.
Grossman, T., & Walsh, J. (2004). Avoiding the pitfalls of ERP system implementation. Information Systems Management.
Hammer, M., & Champy, J. (1994). Reengineering the corporation. London: Nicholas Brealey Publishing.
Kamna, M., & Goyal, D. P. (2001). Information systems effectiveness: An integrated approach. In IEEE Engineering and Management Conference (IEMC'01) Proceedings on Change Management and the New Industrial Revolution (pp. 189-194), Albany, NY.
Kraemmerand, P., Moller, C., & Boer, H. (2003). ERP implementation: An integrated process of radical change and continuous learning. Production Planning and Control, 14(4), 338-348.
Legare, T. L. (2002). The role of organizational factors in realizing ERP benefits. Information Systems Management.
O'Leary, D. E. (2002). Information system assurance for ERP systems: Unique risk considerations. Journal of Information Systems, 16, 115-126.
Park, H. G. (1999). Framework of design interface module in ERP. In Proceedings of the 1999 IEEE International Symposium on Assembly and Task Planning.
Ribbers, P. M., & Schoo, K.-C. (2002). Program management and complexity of ERP implementations. Engineering Management Journal, 14(2).
Scheer, A.-W., & Habermann, F. (2000). Making ERP a success. Communications of the ACM, 43(4).
Shivraj, S. (2000). Understanding user participation and involvement in ERP use. Journal of Management Research, 1(1).
Shtub, A. (2001). A framework for teaching and training in the ERP era. International Journal of Production Research, 39(3), 567-576.
Siau, K., & Messersmith, J. (2002). Enabling technologies for e-commerce and ERP integration. Quarterly Journal of Electronic Commerce, 3, 43-52.
Sumner, M. (2000). Risk factors in enterprise-wide/ERP projects. Journal of Information Technology, 15, 317-327.
Teltumbde, A. (2000). A framework for evaluating ERP projects. International Journal of Production Research, 38(17).
Umble, E. J., & Umble, M. M. (2002). Avoiding ERP implementation failure. Industrial Management.
Wright, S., & Wright, A. M. (2002). Information system assurance for enterprise resource planning systems: Unique risk considerations. Journal of Information Systems, 16, 99-113.
Appendix
Extraction method: principal component analysis. Rotation method: Varimax with Kaiser normalization. Rotation converged in 14 iterations for the implementation matrix (Table 6) and in 8 iterations for the design matrix (Table 7).
This work was previously published in Enterprise Information Systems and Implementing IT Infrastructures: Challenges and Issues, edited by S. Parthasarathy, pp. 195-209, copyright 2010 by Information Science Reference (an imprint of IGI Global).
Chapter 7.5
A Grounded Theory Study of Enterprise Systems Implementation: Lessons Learned from the Irish Health Services John Loonam Dublin City University, Ireland Joe McDonagh University of Dublin, Trinity College, Ireland
Abstract Enterprise systems (ES) promise to integrate all information flowing across the organisation. They claim to render redundant many of the integration challenges associated with legacy systems, to bring greater competitive advantages to the firm, and to assist organisations to compete globally. However, despite such promises, these systems are experiencing significant implementation challenges. The ES literature, particularly studies on critical success factors, points to top management support as a fundamental prerequisite for ensuring implementation success. Yet the literature remains rather opaque, lacking an empirical understanding of how top management supports ES implementation. As a DOI: 10.4018/978-1-60566-040-0.ch003
result, this study seeks to explore this research question. Given the lack of empirical knowledge about the topic, a grounded theory methodology was adopted. Such a methodology allows the investigator to explore the topic by grounding the inquiry in raw data. The Irish health system was taken as the organisational context, its ES initiative being one of the largest implementations in Western Europe.
Introduction The objective of this chapter is to discuss the application of a Grounded Theory Methodology (GTM) in addressing “enterprise systems (ES) implementation within the Irish health services.” In particular, this chapter will be of benefit to
researchers and practitioners endeavouring to focus on ERP implementation within health care organisations. There is a lack of empirical literature explaining the application of GTM to IS inquiry within health care organisations. This empirical vacuum is even more pronounced when we turn our attention to ES implementations. This chapter comprises two main sections. The first section will "introduce GTM," clearly illustrating its benefits to IS inquiry. The second section will provide the reader with an "application of GTM in practice." The chapter will conclude with reflections on GTM as an approach for IS empiricism.
Investigative Focus Since the 1950s, organisations have sought to increase effectiveness and efficiency through the use of computers. During the 1970s and 1980s, IS was recognised as a means of creating greater competitive advantage for implementing organisations. Today, IS has permeated to the very core of many firms, often determining their success, or indeed failure. As a consequence of the escalated growth of and interest in IS, huge emphasis has been placed on integrating various systems throughout the organisation. Such integration gives the organisation a single view of the enterprise. To this end, enterprise systems (ES) began to emerge in the early 1990s. These systems promised "seamless integration" of an organisation's business processes throughout its value chain (Davenport, 2000). In other words, these systems allow the organisation to unite all its business processes under the umbrella of a single system. According to Parr and Shanks (2000), ESs have two important features: firstly, they facilitate a causal connection between a visual model of business processes and the software implementation of those processes; and secondly, they ensure a level of integration, data integrity and security that is not easily achievable with multiple software
platforms. Kraemmergaard and Moller (2002, p. 2) note that the real benefits of ES are their potential to integrate beyond the organisation's own value chain, delivering interenterprise integration. This form of integration allows a single organisation to integrate with customers and suppliers along its value chain and with other organisations with similar areas of interest (Davenport, 2000). Finally, Brown and Vessey (1999, p. 411) believe that ES implementations provide "total solutions" to an organisation's information systems needs by addressing a large proportion of business functions. These "off the shelf" packages allow an organisation to improve its current business processes and adopt new best practices (Al-Mashari, 2000). Consequently, many organisations have moved to implement enterprise systems. The adoption of these systems is expected to bring significant benefits to the organisation. The case literature illustrates this point, with the Toro Co. saving $10 million annually due to inventory reductions, while Owens Corning claims that its ES software helped it to save $50 million in logistics, materials management, and sourcing (Umble, Haft, & Umble, 2003, p. 244). Similarly, other cases reveal large savings in costs and increased levels of organisational effectiveness after ES implementation. Companies such as Geneva Pharmaceuticals (Bhattacherjee, 2000), Lucent Technologies (Francesconi, 1998), Farmland Industries (Jesitus, 1998), and Digital Equipment Corporation (Bancroft, Seip, & Sprengel, 1998) have had significant reductions in costs and increased organisational performance as a result of ES adoptions. The literature highlights notable advantages and benefits for the implementation of these systems. However, despite such promises, implementation results have revealed significant failures. Some studies have cited up to 90% failure rates for the implementation of such systems (Sammon, Adam, & Elichirigoity, 2001). Considering that these systems are enterprise-wide, can often cost
millions of euro, and can take between one and three years to install (Markus & Tanis, 2000c), their failure can have devastating consequences for an organisation. For example, in the case of FoxMeyer Drug, a $5 billion pharmaceutical company, the firm filed for bankruptcy after major problems were generated by a failed enterprise system (Helm, Hall, & Hall, 2001). It therefore becomes important to ask why: why are ES implementations delivering such poor performances, and in some cases failing completely? According to Sammon et al. (2001, p. 2), "the main implementation risks associated with ES initiatives are related to change management and the business reengineering that results from integrating an organisation." According to a study by Deloitte and Touche (2001), the main reasons for poor ES performance after implementation range from people obstacles (which according to the study contributed 68% of the problem), to business processes (16%), and information systems issues (12%) (cited in Crowe, Zayas-Castro, & Vanichsenee, 2002, p. 3). As a consequence, empirical investigations have focused on factors that are critical to ES success, that is, critical success factors (CSFs). In particular, top management support was identified as the most important factor for ensuring success (Somers & Nelson, 2001; Umble et al., 2003). However, there was a lack of empirical depth within the literature as to how top management supports ES implementation. An empirical inquiry was called for. This research, therefore, sought to investigate the topic within the Irish health services. In conducting such a study, the investigation would need to build an empirical explanation, that is, theory. However, before building any theory, the investigator must first align their philosophical views with the demands of the research question. The outcome decides the methodological choices available to the investigator and the investigation.
A Call for Interpretivism Before discussing the merits and application of a grounded theory methodology, it is first important to provide the reader with a preliminary review of the nature of empirical inquiry. Prior to commencing any inquiry, the investigator must address their ontological and epistemological views and, indeed, how such views fit with their research question and the methodological choices made to conduct the study. Essentially, ontology describes reality, while epistemology is the relationship between reality and the investigator, with methodology being the technique used to discover such reality (Perry, Riege, & Brown, 1999). Hirschheim (2002) provides an historical overview of the relationship between the investigator and reality since the 17th century, that is, an explanation of the epistemological status of science. In effect, two epistemological paradigms have preoccupied scientists' view of reality: the positivist and interpretivist perspectives. Briefly explained, the positivist perspective is based on the ontological assumption that there exists an objective social reality that can be studied independently of the actions of the human actors in this reality. "The epistemological assumption following from this is that there exist unidirectional cause-effect relationships that can be identified and tested through the use of hypothetic-deductive logic and analysis" (Khazanchi & Munkvold, 2000, p. 34). In contrast, the interpretivist perspective is based upon the ontological assumption that reality, and our knowledge thereof, are social constructions, incapable of being studied independent of the social actors that construct and make sense of reality. Instead of seeking unidirectional cause-effect relationships, the focus is to understand the actors' view of their social world (Khazanchi & Munkvold, 2000, p. 34). As this study seeks to "explore" how top management supports ES implementation, an interpretivist epistemology is required. This investigation
focuses on understanding an enterprise system within a specific organisational setting. Skok and Legge (2002, p. 74) note that when examining any ES project, the situation becomes complex because of the variety of associated stakeholders and the inter-relationships between them. In such a world of multiple perspectives and "interpretations," there is a need to provide an abstraction of the situation. In other words, it is important that the investigator remain open to interpretation when exploring the nature of enterprise systems. Multiple perspectives, however, are not only applicable to enterprise systems but to all information systems. Greatbatch, Murphy, and Dingwall (2001, p. 181), for example, note that "many of the social processes which surround the implementation of computer systems over time in real-world settings are not amenable to quantitative (or positivistic) analysis because they resist treatment as discrete, static entities." In order to remain open to interpretation, it is critical that the investigator adopt a suitable methodological approach. The choice of methodology is important: it needs to align with the empirical orientation of the study, which for this investigation is interpretivism, while also providing the investigator with appropriate tools and techniques to collect and analyse data. As many interpretivist inquiries seek to explore a new topic, current empirical knowledge is often limited. It is therefore important that the investigator choose a methodology that remains flexible during theory building. Such approaches must start research with a "considerable degree of openness to the field data" (Walsham, 1995, p. 76). Eisenhardt (1989, p. 536), dealing with case studies, states that attempting to approach this ideal is important because preordained theoretical perspectives or propositions may bias and limit the findings: investigators should formulate a research problem and possibly specify some potentially important variables, but should avoid thinking about specific relationships between variables and theories as much as possible, especially at the outset of the process.
Walsham reiterates the importance of avoiding formal theoretical predispositions, stating that while "theory can provide a valuable initial guide, there is a danger of the researcher only seeing what the theory suggests, and thus using the theory in a rigid way which stifles potential new issues and avenues of exploration" (1995, p. 76). It therefore becomes important for the investigator to choose a methodology that remains flexible during data collection and analysis. For this investigation, a grounded theory methodology (GTM) offered such an approach. The objective of GTM is to develop theory that is grounded in data which are systematically gathered and analysed. The theory emerges during the research process and is a product of the continuous interplay between analysis and data collection (Glaser & Strauss, 1967).
A Grounded Perspective

The grounded theory methodology (GTM) first emerged in the 1960s, with Glaser and Strauss publishing studies based on the method. In 1967 both authors published an extended exposition of the method, and this founding text has remained the main reference point for researchers working with GTM. Its origins lie in the principles of symbolic interactionism, which hold that researchers must enter the world of their subjects in order to understand the subjects' environment and the interactions and interpretations that occur. Symbolic interactionism is both a theory of human behaviour and an approach to inquiry about human conduct and group behaviour. A principal tenet is that humans come to understand social definitions through the socialisation process (Goulding, 2002). Using the principles of symbolic interactionism, Glaser and Strauss set out to develop a more systematic procedure for collecting and analysing qualitative data (Goulding, 2002). The basic
premise of grounded theory grew largely out of protest against the methodological climate in which the role of qualitative research was viewed as preliminary to the "real" methodologies of quantitative research (Charmaz, 1983). Grounded theory was therefore intended as a methodology for developing theory that is grounded in data which are systematically gathered and analysed. The theory evolves during the research process and is a product of continuous interplay between analysis and data collection (Strauss & Corbin, 1990). The method is most commonly used to generate theory where little is already known, or to provide a fresh slant on existing knowledge (Turner, 1983). Herein lies the fundamental difference between GTM and quantitative methods of inquiry: GTM normally aims to "explore" (interpretivism) rather than to "test" (positivism) theory. Glaser and Strauss saw this imbalance between "theory generation and verification" and set about developing processes for theory generation, as opposed to theory generated by logical deduction from a priori assumptions (Bryant, 2002, p. 28). This gave rise to the grounded theory method, whose fundamental objective is to develop theory from exploratory research. One of the major challenges facing GTM, however, is shared by other qualitative approaches, namely the dominance of positivism within general research studies. This is particularly evident within IS research, where the majority of studies have focused upon positivistic criteria that aim to test theory. This, of course, is a result of how IS has been viewed within practice, and unfortunately academia, that is, as a separate technical systems-based entity. However, many recent IS studies have illustrated the importance of understanding wider organisational, social, and cultural aspects, and this in turn has elevated the importance of qualitative methods within IS research. Until relatively recently, the method had something of a peripheral, if not pariah, status in many areas; but in recent years it has enjoyed a resurgence, and there is a growing body of literature that attests to its range of application in IS research (Bryant, 2002, p. 28).
According to Myers and Avison (2002, p. 9), grounded theory approaches are becoming increasingly common in IS research literature because the method is extremely useful in developing context-based descriptions and explanations of the phenomenon under study. It is also a general style of doing analysis that does not depend on particular disciplinary perspectives (Strauss, 1987) and therefore lends itself to information systems research, which can be described as a hybrid discipline (Urquhart, 2000). One of the better examples of a grounded theory approach in IS research is Orlikowski's paper on CASE tools as organisational change (1993). She states that the grounded theory approach was useful because it allowed a focus on contextual elements, as well as the action of key players associated with organisational change, elements that are often omitted in IS studies that rely on variance models and cross-sectional, quantitative data (Orlikowski, 1993, p. 310). Similarly, GTM appealed to this investigation as it sought to explore the field of enterprise systems.
The Application of GTM

In adopting a GTM approach to inquiry, the Irish health system was selected as the organisational context for this investigation. Preliminary inquiry revealed that this organisation was implementing an enterprise system (SAP); the initiative had commenced in 1995, was one of the largest ES implementations in Western Europe, and had an estimated €130 million spent on it. In order to explain the application of GTM in practice, this investigation adapts Pandit's (1996) five-stage model. Before discussing these stages, however, let us first briefly reflect on the workings of grounded theory as a methodology.
Using a grounded theory method, both secondary and primary data are collected and analysed using three coding techniques, namely open, axial, and selective coding. GTM inquiry begins by theoretically sampling emerging data, where future directions and decisions concerning data collection and analysis are based upon prior knowledge and understanding. In GTM, data collection and analysis occur in tandem with one another. While theoretical sampling assists with data collection, the investigator also deploys the three coding techniques to analyse the data. Open coding moves data from general phenomena into emerging trends and concepts; in effect, data concepts are the building blocks of GTM. During open coding the investigator is "comparing" all data and assigning specific codes to concepts. The next stage involves axial coding, where the emergent concepts are classified into higher order concepts, or categories; this process involves reassembling data that were fractured during open coding. Finally, selective coding takes place, where data are refined and core categories selected. These categories form the rudimentary elements of a "grounded theory." As data collection and analysis occur in tandem, coding is complete once the investigator reaches theoretical saturation. Saturation occurs when no new concepts emerge from the field and only the same data concepts keep appearing. In order to illustrate the above workings of GTM in practice, five stages are now presented. While these stages are numbered from one to five, it is important to note that they are iterative, and the investigator must often move back and forth between them throughout the investigation. The stages begin with a research design phase, followed by data collection, data ordering, and data analysis. Finally, the investigator compares and contrasts the extant literature with the newly emergent grounded theory. Each of these phases will now be highlighted in more detail.
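Before turning to those stages, the coding cycle just described can be caricatured in a few lines of code. The following Python sketch is purely illustrative and is not an artefact of the study: the codes, the category, and the data fragments are invented, and the keyword matching merely stands in for the analyst's judgement. It shows open coding labelling fragments with concepts, axial coding grouping those concepts into a category, and saturation being declared when a new batch of data yields no new concepts.

# A toy rendering of the GTM coding cycle described above. All codes,
# categories, and data fragments are invented for illustration.

fragments = [
    "CEO attended the project steering meeting",
    "Budget for user training approved by the board",
    "CEO attended the go-live review",
]

def open_code(fragment):
    # Open coding: attach an emerging concept label to a data fragment.
    if "CEO" in fragment or "board" in fragment:
        return "top-management-involvement"
    return "uncoded"

concepts = {open_code(f) for f in fragments}

# Axial coding: group related concepts into a higher order category.
categories = {"management-support": {"top-management-involvement"}}

def saturated(known, new_batch):
    # Theoretical saturation: a new batch of data adds no new concepts.
    return {open_code(f) for f in new_batch} <= known

print(concepts)
print(saturated(concepts, ["CEO chaired the steering meeting"]))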
1. Research design: The objective of this phase is to develop a research focus, which is then narrowed to become a research question. According to Strauss and Corbin (1998), there are a number of methods for assisting the investigator in developing a research focus. These include: (1) identifying problems in the investigator's own workplace, (2) speaking with academic faculty so that the investigator can synthesise their ideas, (3) identifying problems and gaps in the literature, and (4) entering the research field and developing a research question there. Of particular relevance to this investigation was the identification of gaps in the preliminary literature. A preliminary review of the enterprise systems literature revealed that the implementation of these systems is not easy. Some studies cited failure rates of up to 90% with such systems, primarily because organisational issues were ignored or bypassed. In particular, top management support was identified as one of the most important critical factors for ensuring the successful implementation of an enterprise system. Anecdotal evidence emanating from experiential knowledge had informed the investigator of the importance of top management support; now the academic literature was further supporting such beliefs. Consequently, a research question was formed.

2. Data collection: In a GTM inquiry, the data collection phase begins when the study commences; all data are applicable and relevant to GTM. An important feature of GTM is the simultaneous collection and analysis of data. Upon entry into the field, the investigator also begins to analyse the data, looking for emerging trends. This process is referred to as "theoretical sampling" by the original authors.
Theoretical sampling is the process of data collection for generating theory whereby the analyst jointly collects, codes, and analyses his data and decides what data to collect next and where to find them, in order to develop his theory as it emerges. This process of data collection is "controlled" by the emerging theory. (Glaser & Strauss, 1967, p. 45)

In addition to theoretical sampling, another feature of grounded theory is the application of the "constant comparative" method (Goulding, 2002). Constant comparison involves comparing like with like in order to look for emerging patterns and themes across the data. According to Glaser and Strauss, there are four stages to the constant comparative method: (1) comparing incidents applicable to each category, (2) integrating categories and their properties, (3) delimiting the theory, and (4) writing the theory (Glaser & Strauss, 1967, p. 105). In applying data collection techniques, this investigation separated the data collection stage into two steps, which Strauss and Corbin refer to as (a) technical literature collection and (b) nontechnical literature collection. Technical literature refers to the current body of knowledge about the research topic, that is, the extant literature. Accordingly, this investigation conducted an initial preliminary review of the ES literature; such a review seeks to sensitise the investigator to the emergent data. The nontechnical literature refers to the organisational evidence: meeting minutes, interviews, consulting reports, observations, memo writing, and so forth. This is often referred to as "secondary" data within other qualitative methodologies, for example, case study research. The nontechnical literature played a pivotal role in this investigation. As this study examined a 10-year-long SAP implementation, there was a huge reservoir of nontechnical literature for the investigator to collect. This included consultancy reports, steering meeting minutes, project presentations, government reports, progress reports,
vendor reports, and general project specifications. In addition to this reservoir of nontechnical literature, the investigator conducted a series of interviews. The interview style remained unstructured; that is, questioning tended to emerge during the interview rather than leading the interview in a structured format. Each informant was therefore afforded greater scope to discuss and develop ideas during the session. Preliminary interview questions were developed from the data already collected and analysed. The approach of collecting and analysing data simultaneously greatly aided the process of data sampling and the pursuit of constant data comparisons; in GTM, each interview honed the questioning and format of the next, until theoretical saturation was achieved. Prior to interview commencement, key informants were informed via e-mail of the nature of the research inquiry and the forthcoming interview. After these key informants were interviewed, theoretical sampling of the data selected future informants. Permission to use a dictaphone to record interview sessions was also sought; if informants were uncomfortable with the use of a recording mechanism, the investigator kept notes. All interviews were written up directly after each session, which allowed the investigator to follow up on any outstanding or unclear points with the interviewee. With unstructured interviews, it was difficult to know their specific length beforehand; however, each informant was scheduled for a one-hour interview, and if more time was required a follow-up interview, or a telephone call at a later date, could be arranged. Upon conclusion of each interview, informants were thanked for their participation and given the investigator's contact details. This allowed informants to gain access to the recorded material if they so wished, or to follow up with further comments and suggestions. Interviews were transcribed directly after each meeting. The investigator also kept memos of each meeting, which in turn assisted with the process of probing and questioning the data. Such an approach greatly facilitated the sharpening and focusing of future interview sessions.
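The collect-and-analyse-in-tandem loop described above can also be sketched schematically. In the hypothetical Python fragment below, the informants, their notes, and the keyword "codes" are all invented; the point is simply that each interview is coded straight after the session, and that the emerging codes, not a fixed schedule, decide who is interviewed next (theoretical sampling).

# Hypothetical sketch of collecting and analysing in tandem: each
# interview is coded straight after the session, and the emerging
# codes steer which questions, and which informants, come next.
# Informants, notes, and keyword "codes" are all invented.

KEYWORDS = {"support", "budget", "training", "resistance"}

def code_notes(notes):
    words = {w.strip(".,").lower() for w in notes.split()}
    return words & KEYWORDS

notes = {
    "Project manager": "Top management support and budget were key.",
    "SAP team lead": "Training gaps caused early resistance.",
    "End-user representative": "Resistance faded once training improved.",
}

emerging = set()
queue = ["Project manager", "SAP team lead"]
while queue:
    informant = queue.pop(0)
    new = code_notes(notes[informant]) - emerging
    emerging |= new
    if "resistance" in new:
        # Theoretical sampling: the data decide who is interviewed next.
        queue.append("End-user representative")
    print(informant, "->", sorted(new) or "no new concepts (saturating)")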
3. Data ordering: Data ordering acts as a bridge between data collection and analysis. As this study focused on a complex organisation implementing a system that affected the entire organisation, data management was critical. GTM inquiry also involves many approaches to data collection, which increases the need to order data. In particular, the vast array of nontechnical literature (secondary field data) available to this study meant that the investigator needed to create a systematic order for the data. Folders were created to give a hierarchical structure to the data. In all, the field data were divided into five core folders: (1) Data Analysis, (2) Diary of Research, (3) Health Care Reports, (4) Interviews, and (5) Secondary Data. The data analysis folder focused on the data analysis process, that is, open, axial, and selective coding. The diary of research folder allowed the investigator to record study memos, meeting minutes, and scheduled appointments. The third folder held health care reports, which provided knowledge about health care in Ireland. Interview transcripts and meeting records were stored in the fourth folder. Finally, the secondary data were placed in the fifth core folder, which included project reports, consultant reports, and steering committee minutes and presentations. Data ordering played an important part in assisting the investigator to analyse the data. As data collection and data analysis occur in tandem with one another, that is, theoretical sampling drives data collection, data ordering also occurs in tandem with these two phases.
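As an aside, the five-folder ordering scheme just described is simple enough to be reproduced with a short script. The sketch below is illustrative only: the folder names follow the chapter, while the root directory name is invented.

# Recreating the five-folder ordering scheme with pathlib. The folder
# names follow the chapter; the root directory name is hypothetical.
from pathlib import Path

root = Path("irish_health_study")  # invented root folder
for name in ["1_data_analysis",      # open, axial, selective coding
             "2_diary_of_research",  # memos, minutes, appointments
             "3_health_care_reports",
             "4_interviews",         # transcripts and meeting records
             "5_secondary_data"]:    # project and consultant reports
    (root / name).mkdir(parents=True, exist_ok=True)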
4. Data analysis: In tandem with data collection and ordering, the researcher is also analysing the data. Data analysis starts upon entering the field; without it, theoretical sampling cannot take place. GTM provides a number of methods for analysing data. The idea is to look for patterns and recurring events in the data by comparing data against each other. This process is called "coding," whereby interview, observational, and other forms of data are broken down into distinct units of meaning, which are then labelled to generate concepts. These concepts are then clustered into descriptive categories, which are later re-evaluated for their interrelationships and, through a series of analytical steps, gradually subsumed into higher order categories, or a single core category, which suggests an emergent theory (Goulding, 2002). Theoretical coding involves breaking the data down into specific "units of analysis." Each unit represents a step on the road to an emerging theory; in effect, data are arranged in a hierarchical manner, with the eventual emergence of a grounded theory. Strauss and Corbin (1998) give examples of the hierarchical language used for data analysis:

• Phenomena: Central ideas in the data, represented as concepts.
• Concepts: These move beyond merely describing what is happening in the data to explaining the relationships between and across incidents.
• Categories: These are higher order concepts. Grouping concepts into categories enables the analyst to reduce the number of units with which they are working.
• Properties: These are the characteristics or attributes of a category, the delineation of which gives the category greater meaning.
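To make the hierarchy concrete, it can be rendered as a small data model. The following Python sketch is a hypothetical illustration, not the study's instrument: a category carries its properties and groups its concepts, which in turn label phenomena in the data.

# One way to render Strauss and Corbin's hierarchy as a data model.
# The field names and the example instance are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Category:
    name: str                                        # higher order concept
    properties: list = field(default_factory=list)   # its attributes
    concepts: list = field(default_factory=list)     # grouped concepts

support = Category(
    name="top management support",
    properties=["visibility", "continuity"],         # hypothetical
    concepts=["CEO attendance", "budget approval"],  # hypothetical
)
print(support)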
These data analysis terms are best explained by looking briefly at the coding techniques available to the analyst. Three coding procedures
have been identified: open, axial, and selective coding (Strauss & Corbin, 1998). The fundamental objective of using these coding techniques is to arrive at a situation where the data are "saturated," thus giving rise to a grounded theory. Each technique is now briefly discussed, with examples of its application from this investigation.

1. Open coding: Open coding involves breaking down the data into distinct units of meaning. Such a process allows the researcher to place specific "phenomena" into groups, thereby giving rise to early concept development for the emerging theory. This classification of concepts into specific groups is referred to as "conceptualising" the data (Strauss & Corbin, 1998, p. 103). Following on from this, the process of "abstraction" takes place, where descriptive codes and concepts are moved to a higher abstract level. Abstraction involves collapsing concepts into higher order concepts, known as categories; according to Strauss (1987), it requires constantly asking theoretically relevant questions. To assist the process of abstraction, the researcher moves beyond open coding and towards axial coding techniques. In applying open coding within the Irish health services, the investigator used two steps. First, the investigator moved through the data line by line, italicising, bolding, highlighting, and underlining hardcopy and electronic documents alike. This process fits the rudimentary elements of what open coding is about. The approach proved arduous and time-consuming, but revealed a vast array of imagery, events, words, incidents, acts, and ideas in the data that greatly assisted in developing an understanding of the research phenomenon under inquiry. Strauss and Corbin (1998) refer to these as the building blocks of sound data concepts.
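A toy rendering of this line-by-line pass may help. In the hypothetical Python sketch below, the codes and document lines are invented; each unit of meaning is scanned, matching codes are attached, and the result is written out with a comments column for analyst notes, loosely anticipating the spreadsheet "library" described next.

# Toy line-by-line open coding: each line is scanned, matching codes
# are attached, and the result is written to a CSV "library" with a
# comments column for analyst notes. Codes and lines are invented.
import csv

CODES = {"support": "top-management-support", "training": "user-training"}

def code_line(line):
    return [code for key, code in CODES.items() if key in line.lower()]

document = [
    "The board confirmed its support for the SAP rollout.",
    "Training was deferred until after go-live.",
]

with open("coding_library.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["unit of meaning", "codes", "comments"])
    for line in document:
        writer.writerow([line, "; ".join(code_line(line)), ""])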
The second step involved building a "library" in Microsoft Excel. This allowed the investigator to order data systematically (Pandit, 1996), moving data to a state of higher order concepts, or, as Glaser and Strauss (1967) term it, data conceptualisation. A comments column was also created in the library to allow the investigator to write notes and make comments on emerging data. Glaser and Strauss (1967) encourage investigator commentary and question raising throughout coding, believing that they aid the process of constant comparison and theoretical sampling.

2. Axial coding: Axial coding involves moving to a higher level of abstraction, and is achieved by specifying relationships and delineating a core category or construct around which the other concepts revolve. It relies on a synthetic technique of making connections between subcategories to construct a more comprehensive scheme (Orlikowski, 1993, p. 315). In other words, the purpose of axial coding is to begin reassembling the data that were fractured during open coding. Higher level concepts, known as categories, are related to their subcategories to form more precise and complete explanations about phenomena (Strauss & Corbin, 1998, p. 124). Subcategories resemble categories, except that instead of standing for a phenomenon they ask questions of it, such as when, where, who, how, and with what consequences. Such an approach gives the category greater explanatory power, fitting the idea of developing theoretical abstraction from the data (Strauss & Corbin, 1998). In applying axial coding, Strauss and Corbin (1998) support the use of story-maps or network diagrams. Accordingly, this investigation designed a story-map to tell the story of the SAP implementation within the Irish health services.
Specifically, the story-map illustrated the strategies taken, the conditions that shaped these strategies, the actions taken because of these conditions, and the consequences and outcomes of such actions (a schematic rendering follows this list). In story-mapping, axial coding reconstructs concepts "fractured" during open coding and unites them through data abstraction; Strauss and Corbin (1998) found that story-mapping greatly assisted data abstraction. Abstraction involves grouping "concepts" to form higher order concepts, or "subcategories." After mapping the story, the investigator is able to abstract such "concept groups" or "subcategories." In effect, a number of key trends begin to emerge from the story under investigation, and these key trends form the basis for an emergent theory.

3. Selective coding: Selective coding is the final data analysis technique. Its fundamental premise is to "refine and integrate categories" once a point of theoretical saturation has been reached (Strauss & Corbin, 1998, p. 143): emerging data no longer present any new ideas or concepts, but merely repeat data already collected and categorised. Understanding the causal relationships between data, as identified through axial coding, the researcher is able to place data concepts and subcategories into higher order emerging categories. Selective coding then refines and integrates these emerging categories; data are "selected," or pulled up out of concepts, and categorised. The application of selective coding revealed a number of key patterns explaining how top management supports ES implementation. In effect, these key patterns tell the story of ES implementation within the Irish health services.

5. Literature comparison: Finally, the literature comparison phase is the last of the five stages in GTM inquiry. The objective at this stage is to compare the "emergent theory" with the extant literature, that is, the technical literature in the area's domain.
At this stage, the researcher can compare and contrast their theory with various frameworks or macro theories from the literature. There are two distinct advantages to tying the emergent theory to the extant literature: (1) it improves construct definitions, and therefore "internal" validity, and (2) it improves "external" validity by establishing the domain to which the study's findings can be generalised (Pandit, 1996).
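The schematic rendering of the story-map promised above is given here. The Python sketch is purely illustrative: the conditions, strategies, actions, and consequences are invented placeholders, not findings from the study; it merely shows how axial coding traces the links that reunite concepts fractured during open coding.

# A schematic story-map for axial coding: conditions shape strategies,
# strategies lead to actions, and actions carry consequences. All
# entries are invented placeholders, not findings from the study.
story_map = {
    "conditions":   ["fragmented legacy systems", "health reform agenda"],
    "strategies":   ["phased SAP rollout"],
    "actions":      ["pilot-site go-live", "executive steering group"],
    "consequences": ["visible top management support", "longer timeline"],
}

links = [("conditions", "strategies"),
         ("strategies", "actions"),
         ("actions", "consequences")]

# Axial coding reunites fractured concepts by tracing such links.
for src, dst in links:
    for s in story_map[src]:
        for d in story_map[dst]:
            print(f"{s} -> {d}")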
Reflections on GTM in Practice

This section seeks to illustrate some practical issues relating to GTM inquiry. These reflections highlight the investigator's key observations after using a GTM approach. First, the importance of maintaining ethical standards is highlighted. This is followed by a discussion of investigator bias, the challenge of conceptualisation, and the role of the literature in theory formation. Throughout this inquiry, it was vital that certain ethical standards were adhered to. Ethics are best described as the set of values or principles the investigator maintains during the investigation (Goulding, 2002). Examples of ethical considerations include client/investigator confidentiality; including in the research only people who have given their consent; recording interviews only when permitted; not using documents unless they have first been cleared by informants; limiting "surprise" questions within interviews by forwarding the interview structure to all informants beforehand; and regularly communicating the research project's integrity and confidentiality when dealing with informants. Once the Irish health services had agreed to this investigation, a liaison officer was appointed to assist the study. This individual played a pivotal role in ensuring all prospective informants were notified of the nature of the study and what was expected of them.
Just as ethical considerations must be adhered to, the investigator's own role in interpretivist inquiry must be acknowledged. The investigator is the primary means for collecting, ordering, and analysing data; they can therefore never assume a value-neutral stance in the relationship between theory and practice, and are always implicated in the phenomenon being studied (Orlikowski & Baroudi, 1991). It is, therefore, critical that the investigator acknowledge this role and try to control individual bias. For this study, GTM provided the investigator with a means of overcoming challenges associated with particular biases. Through "constant comparison" of data, the investigator is able to state their assumptions about the data, which in turn can be compared and contrasted with emerging data. This process allows individual biases to be either validated or rejected by the data; as a result, investigator biases are minimised through empirical rigor and the constant comparison of data. However, while investigator biases can be minimised, it is worth noting that GTM inquiries involve considerable conceptualisation of data (Strauss & Corbin, 1998). This process, as the original authors note, requires a considerable degree of openness to the data (Glaser & Strauss, 1967), which is best fostered through a "creative theoretical imagination" (Strauss & Corbin, 1998). Essentially, the investigator moves data "concepts" to higher order concepts, which in turn become "categories"; categories form the rudimentary elements of a "grounded theory." In order to derive categories, however, the investigator must conceptualise data. Herein lies the challenge for GTM inquiries, which may be accused of lacking scientific (deductive) rigor and of resting instead on subjective (inductive) assumptions. Yet Strauss and Corbin (1998) note that the constant interplay of data in GTM involves both deductive and inductive approaches to inquiry: by "constantly comparing" emergent data, the investigator is deductively conceptualising data, whereas "theoretical sampling" is
inductively selecting data deduced from earlier constant comparisons. Furthermore, this study found Strauss and Corbin's (1998) approach to GTM quite systematic and rigorous in its treatment of data analysis. This point is further evidenced by Smit and Bryant's (2000) study, whose findings revealed a majority preference within the IS literature for Strauss and Corbin's approach over Glaser's; those authors likewise note the systematic and thorough nature of Strauss and Corbin's approach. Reflecting on GTM in practice, however, this study believes that Strauss and Corbin's (1998) approach could be further enriched by the use of multiple methodologies to generate deeper insights into the emergent theory. This point is supported by Gasson, who noted that a holistic view of any research question requires multiple approaches, as "the selection of a research strategy entails a tradeoff: the strengths of one approach overcome the weaknesses in another approach and vice versa. This in itself is a powerful argument for pluralism and for the use of multiple research approaches during any investigation" (Gasson, 2004, p. 99). Examples of other methods of inductive coding include "discourse analysis, soft systems conceptual models, process modelling, and inductive categorisation" (Gasson, 2004). Such tools would sharpen analysis, allowing the investigator to further compare and contrast data. Finally, reflecting on the role of the extant literature in the emergent theory, this investigation disagrees with Glaser's (1992) refusal to review current knowledge when conducting GTM inquiry. Instead, it concurs with Strauss and Corbin's (1998) view that the literature can greatly assist the investigator in shaping their understanding of the research phenomenon. Indeed, those authors view the literature as another form of empirical data; isolating its review until primary data collection has occurred does little to advance knowledge and theoretical development. For this investigation, while empirical knowledge was lacking within the extant literature,
preliminary review prior to field data collection greatly assisted the investigator in understanding data concepts.
Future Research Directions

Future research will seek to extend the current study in a number of directions. First, as this investigation seeks to build a "grounded theory," the emergent theory will need further empirical validation and elaboration. Exploratory studies build "substantive theories," that is, theories specific to the substantive case under inquiry; these theories require further exploration and eventual generalisation. Consequently, future investigations might conduct more quantitative or positivistic inquiries, using multiple case studies or survey instruments, to refine and test this emergent theory. Second, this study focused on the Irish health system. Future inquiries could examine other health systems internationally, which would allow for a comparative analysis of results and in turn enable practitioners to broaden their knowledge. Similarly, the study could be extended to other public and private sector organisations within Ireland, and indeed internationally; questions such as how other private/public sector industries shape top management support could then be asked. Finally, future research needs to address the full set of critical success factors for enterprise systems implementation. This study focused on top management support, but other studies have identified further factors that are also important for project success. For example, future investigations could examine issues such as project team competence, the role of the project champion, the role of management and technical consultants during implementation, change management perspectives throughout implementing organisations, relationship management between vendors and their clients, and data integrity and conversion to the new system.
Conclusion

Reflecting on GTM as an approach to empirical inquiry, the investigator must be clear as to their ontological worldview and their corresponding epistemology; these views, in turn, determine their methodological choice. As this study sought to explore the field of enterprise systems, an interpretivist stance was adopted. Such a stance seeks to build theory by interpreting the world through the words and behaviours of the social actors within it. While initially much of the language of scientific endeavour belonged to the world of positivism, many IS investigations now acknowledge the depth and applicability of interpretivist inquiry, particularly for studies focusing on complex systems and on social and organisational issues. Such inquiry is particularly suited to IS initiatives in complex organisations such as health care. Finally, in describing how this investigation conducted its GTM inquiry, five stages were outlined. The chapter first explained the research design process; the data collection, ordering, and analysis phases were then explained; and finally the comparison of the literature with the emergent theory was considered. While many of these phases occurred in tandem, they are separated in this chapter for explanatory purposes only. The grounded theory method allows the investigator to deploy a multitude of techniques for data collection, for example, memo-taking, secondary data collection, and interviewing. Data are analysed by constantly comparing and theoretically sampling all emerging concepts. Strauss and Corbin (1998) outline three coding techniques for data analysis; following these techniques, the investigation allows data concepts to emerge. These concepts, as Strauss and Corbin (1998) state, form the "building blocks of a grounded theory": they are developed during coding and selected to form an emergent theory grounded in the data.
References

Al-Mashari, M. (2000). Constructs of process change management in ERP context: A focus on SAP R/3. In Americas Conference on Information Systems, Long Beach, California.

Bancroft, N., Seip, H., & Sprengel, A. (1998). Implementing SAP R/3: How to introduce a large system into a large organisation. Greenwich, CT: Manning Publications.

Bhattacherjee, A. (2000). Beginning SAP R/3 implementation at Geneva Pharmaceuticals. Communications of the AIS.

Brown, C. V., & Vessey, I. (1999). ERP implementation approaches: Toward a contingency framework. In International Conference on Information Systems (ICIS), Charlotte, North Carolina.

Bryant, A. (2002). Re-grounding grounded theory. Journal of Information Technology Theory and Application, 4(1), 25–42.

Charmaz, K. (1983). The grounded theory method: An explication and interpretation. In R. Emerson (Ed.), Contemporary field research: A collection of readings. Boston, MA: Little Brown.

Crowe, T. J., Zayas-Castro, J. L., & Vanichsenee, S. (2002). Readiness assessment for enterprise resource planning. In the 11th International Conference on Management of Technology (IAMOT 2002).

Davenport, T. H. (2000). Mission critical: Realizing the promise of enterprise systems. Boston, MA: Harvard Business School Press.

Deloitte & Touche (2001, June). Value for money audit of the Irish health system. Government of Ireland.

Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550. doi:10.2307/258557

Francesconi, T. (1998). Transforming Lucent's CFO. Management Accounting, 80(1), 22–30.

Gasson, S. (2004). Rigor in grounded theory research: An interpretive perspective on generating theory from qualitative field studies. In M. E. Whitman & A. B. Woszczynski (Eds.), The handbook of information systems research (pp. 79-102). Idea Group Publishing.

Glaser, B. G., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: Aldine de Gruyter.

Goulding, C. (2002). Grounded theory: A practical guide for management, business, and market researchers. London: Sage Publications.

Greatbatch, D., Murphy, E., & Dingwall, R. (2001). Evaluating medical information systems: Ethnomethodological and interactionist approaches. Health Services Management Research, 14(3), 181–191. doi:10.1258/0951484011912681

Helm, S., Hall, M., & Hall, C. (2003). Pre-implementation attitudes and organisational readiness for implementing an ERP system. European Journal of Information Systems, 146.

Hirschheim, R., & Newman, M. (2002). Symbolism and information systems development: Myth, metaphor, and magic. In M. D. Myers & D. Avison (Eds.), Qualitative research in information systems: A reader. London: Sage Publications.

Jesitus, J. (1998). Even farmers get SAPed. Industry Week, 247(5), 32–36.

Khazanchi, D., & Munkvold, B. (2000). Is information systems a science? An inquiry into the nature of the information systems discipline. The Data Base for Advances in Information Systems, 31(3), 24–40.
Kraemmergaard, P., & Moller, C. (2000). A research framework for studying the implementation of enterprise resource planning (ERP) systems. IRIS 23, Laboratorium for Interaction Technology, University of Trollhattan Uddevalla.

Markus, M. L., & Tanis, C. (2000). The enterprise systems experience: From adoption to success. In R. W. Zmud (Ed.), Framing the domains of IT research: Glimpsing the future through the past (pp. 173-207). Cincinnati, OH: Pinnaflex Educational Resources.

Myers, M. D., & Avison, D. (Eds.). (2002). Qualitative research in information systems: A reader. London: Sage Publications.

Orlikowski, W. J. (1993). CASE tools as organisational change: Investigating incremental and radical changes in systems development. Management Information Systems Quarterly, 17(3), 309–340. doi:10.2307/249774

Orlikowski, W. J., & Baroudi, J. J. (1991). Studying information technology in organisations: Research approaches and assumptions. Information Systems Research, 2(1), 1–28. doi:10.1287/isre.2.1.1

Pandit, N. (1996). The creation of theory: A recent application of the grounded theory method. The Qualitative Report, 2(4).

Parr, A., & Shanks, G. (2000). A model of ERP project implementation. Journal of Information Technology, 15, 289–303. doi:10.1080/02683960010009051

Perry, C., Riege, A., & Brown, L. (1999). Realism's role among scientific paradigms in marketing research. Irish Marketing Review, 12(2).

Sammon, D., Adam, F., & Elichirigoity, F. (2001). ERP dreams and sound business rationale. In Seventh Americas Conference on Information Systems.
Skok, W., & Legge, M. (2002). Evaluating enterprise resource planning (ERP) systems using an interpretive approach. Knowledge and Process Management, 9(2), 72–82.